How to evaluate the trained model? Is there a test.py? #69
Comments
I have the same question.
May I ask how you solved this problem?
You could consider https://github.com/salaniz/pycocoevalcap for evaluation.
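A minimal sketch of how pycocoevalcap could be wired up, assuming a COCO-style annotation file and a results JSON containing one generated caption per image (the file names below are placeholders, not files from this repo):

```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

# Placeholder paths: ground-truth COCO annotations and a JSON list of
# {"image_id": <int>, "caption": <str>} entries produced by the trained model.
annotation_file = 'captions_val2014.json'
results_file = 'generated_captions.json'

# Load ground truth and model results.
coco = COCO(annotation_file)
coco_result = coco.loadRes(results_file)

# Evaluate only the images that actually have a generated caption.
coco_eval = COCOEvalCap(coco, coco_result)
coco_eval.params['image_id'] = coco_result.getImgIds()
coco_eval.evaluate()

# Prints BLEU-1..4, METEOR, ROUGE_L, CIDEr (and SPICE if installed).
for metric, score in coco_eval.eval.items():
    print(f'{metric}: {score:.3f}')
```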
Thanks, but I have a question: does the evaluation produce only one label instead of five?
Change the original data from
image1: (caption1, caption2, caption3, caption4, caption5)
to one caption per line (a small flattening sketch follows the list):
image1:caption1
image1:caption2
image1:caption3
image1:caption4
image1:caption5
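A hypothetical sketch of that flattening step, assuming the captions start out grouped per image in a dict; the variable names are illustrative only:

```python
# Flatten {image: [cap1, ..., cap5]} into (image, caption) pairs,
# one row per caption, before feature extraction / parsing.
data = {
    "image1": ["caption1", "caption2", "caption3", "caption4", "caption5"],
}

pairs = [
    (image_id, caption)
    for image_id, captions in data.items()
    for caption in captions
]

for image_id, caption in pairs:
    print(f"{image_id}:{caption}")
```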
Thank you for your reply. This format seems to be the one used when extracting features with CLIP. My question is: is it the same for inference/validation evaluation? Are the generated captions saved in this format, and is that what BLEU etc. are computed on?
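Regarding the "one label instead of five" concern: when scoring with pycocoevalcap, all five ground-truth captions stay grouped under the image id as references, while the model contributes exactly one generated caption per image. A hypothetical example with made-up captions (the scorer calls are from pycocoevalcap; PTBTokenizer needs Java installed):

```python
from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider

# Five reference captions per image (ground truth), one generated caption (model output).
gts = {
    "image1": [{"caption": c} for c in [
        "a dog runs on the beach",
        "a brown dog running near the water",
        "a dog playing on the sand",
        "a dog sprinting along the shore",
        "a puppy runs by the ocean",
    ]],
}
res = {
    "image1": [{"caption": "a dog is running on the beach"}],
}

# Tokenize both sides the same way before scoring.
tokenizer = PTBTokenizer()
gts_tok = tokenizer.tokenize(gts)
res_tok = tokenizer.tokenize(res)

# BLEU-1..4 and CIDEr are computed against all five references per image.
bleu_scores, _ = Bleu(4).compute_score(gts_tok, res_tok)
cider_score, _ = Cider().compute_score(gts_tok, res_tok)
print("BLEU-1..4:", bleu_scores)
print("CIDEr:", cider_score)
```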