vo_gpu_next: raise LUT file max size and report an error if exceeded
 First of all, `lutdata` was passed to `pl_lut_parse_cube()` even if
 it was empty. This caused a bogus error message to be printed by
 the LUT parsing code in libplacebo.

 "[vo/gpu-next/libplacebo] Missing LUT size specification?"

 One of the reasons why `lutdata` can be empty is if the LUT file size
 exceeds the `max_size` limit passed to `stream_read_file()`.

 So in this commit, `lutdata` is checked first before getting passed to
 libplacebo, and a non-bogus error message is printed if it's empty.

 And secondly, the maximum allowed size is raised from the quite limiting
 100 MB to 1.5 GiB. The new maximum matches the limit of the LUT cache,
 which is sized to support LUTs up to 512x512x512.

Signed-off-by: Mohammad AlSaleh <[email protected]>
MoSal committed Oct 22, 2024
1 parent 1edc021 commit a933be9
Showing 1 changed file with 8 additions and 2 deletions.
10 changes: 8 additions & 2 deletions video/out/vo_gpu_next.c
@@ -2058,8 +2058,14 @@ static void update_lut(struct priv *p, struct user_lut *lut)
     // Load LUT file
     char *fname = mp_get_user_path(NULL, p->global, lut->path);
     MP_VERBOSE(p, "Loading custom LUT '%s'\n", fname);
-    struct bstr lutdata = stream_read_file(fname, p, p->global, 100000000); // 100 MB
-    lut->lut = pl_lut_parse_cube(p->pllog, lutdata.start, lutdata.len);
+    const int lut_max_size = 1536 << 20; // 1.5 GiB, matches lut cache limit
+    struct bstr lutdata = stream_read_file(fname, NULL, p->global, lut_max_size);
+    if (!lutdata.len) {
+        MP_ERR(p, "Failed to read LUT data from %s, make sure it's a valid file "
+                  "and smaller or equal to %d bytes\n", fname, lut_max_size);
+    } else {
+        lut->lut = pl_lut_parse_cube(p->pllog, lutdata.start, lutdata.len);
+    }
     talloc_free(fname);
     talloc_free(lutdata.start);
 }