LLAVA-CRITIC is interesting work, thanks for sharing!
However, I have a question about it. I noticed that the model's training data has been released, but I could not find the benchmark for In-domain Pointwise Scoring mentioned in Section 5.1 of the paper: "we select 7 popular multimodal benchmarks and collect candidate responses from 13 commonly used LMMs alongside their GPT-4o evaluations, resulting in a total of 14174 examples". Will this benchmark be released as well?
Looking forward to your reply, thank you!