
How to evaluate the trained model? Is there a test.py ? #69

Open
liluying1996 opened this issue Jul 5, 2023 · 6 comments

Comments

@liluying1996

No description provided.

@rongtongxueya

I have the same question

@AlanLowe007

May I ask how you solved this problem?

@baiyuting

you could consider https://github.com/salaniz/pycocoevalcap for evaluation
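For reference, a minimal sketch of how pycocoevalcap is typically fed (the dict format and the `compute_score` call are assumptions based on its README, not this repository's code). Both dicts map an image id to a list of caption strings: all reference captions for the ground truth, and a single generated caption for the model output.

```python
# Expected input format for pycocoevalcap scorers (assumed from its README):
# gts  -- image id -> list of ALL reference captions (COCO has 5 per image)
# res  -- image id -> list containing the ONE generated caption
gts = {
    1: ["a dog runs on the grass",
        "a brown dog running in a field",
        "the dog is running outside",
        "a dog plays on green grass",
        "a small dog runs across the lawn"],
}
res = {
    1: ["a dog running on grass"],
}

try:
    # Hypothetical usage -- requires `pip install pycocoevalcap`.
    from pycocoevalcap.bleu.bleu import Bleu
    from pycocoevalcap.cider.cider import Cider

    bleu_score, _ = Bleu(4).compute_score(gts, res)   # [BLEU-1 .. BLEU-4]
    cider_score, _ = Cider().compute_score(gts, res)
    print("BLEU:", bleu_score, "CIDEr:", cider_score)
except ImportError:
    # Library not installed; the dicts above still document the format.
    print("pycocoevalcap not installed")
```

Each scorer compares the single hypothesis in `res` against every reference in `gts` for the same image id, so no information from the five references is discarded.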

@rongtongxueya

> you could consider https://github.com/salaniz/pycocoevalcap for evaluation

Thanks, but I have a question: does the evaluation produce only one label instead of five?

@baiyuting

baiyuting commented Jul 19, 2023 via email

@rongtongxueya

Thank you for your reply. That seems to be the format used when extracting features with CLIP. My question is whether the same applies to inference and validation: are the generated captions saved in this format, and is that what BLEU and the other metrics are computed on?
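For context on the one-caption-versus-five-references question: BLEU-style metrics score the single generated caption against all references at once, crediting each candidate n-gram up to its maximum count in any one reference. A minimal pure-Python sketch of clipped unigram precision (the core of BLEU-1); the example captions here are invented, not from any dataset:

```python
from collections import Counter

def clipped_unigram_precision(candidate, references):
    """BLEU-1-style modified precision: each candidate word is credited
    at most as many times as it appears in any single reference."""
    cand_counts = Counter(candidate.split())
    # Per-word maximum count over all references (the "clip" values).
    max_ref_counts = Counter()
    for ref in references:
        for word, n in Counter(ref.split()).items():
            max_ref_counts[word] = max(max_ref_counts[word], n)
    clipped = sum(min(n, max_ref_counts[word]) for word, n in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total else 0.0

refs = [
    "a man rides a horse",
    "a person riding a horse",
    "a man on a horse",
    "someone rides a brown horse",
    "a man is riding a horse",
]
print(clipped_unigram_precision("a man riding a horse", refs))  # -> 1.0
```

Every word of the candidate is covered by at least one of the five references, so the precision is 1.0; a caption matching none of them would score 0. This is why the generated output is a single caption while the ground truth keeps all five references.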
