
Results using your code are different from those reported in the ICCV paper #68

Open
bobetocalo opened this issue May 26, 2020 · 0 comments

Comments

@bobetocalo

I have tried to reproduce the results reported in the HBB table:

Task2 - Horizontal Leaderboard

| Approaches | mAP | PL | BD | BR | GTF | SV | LV | SH | TC | BC | ST | SBF | RA | HA | SP | HC |
|------------|-----|----|----|----|-----|----|----|----|----|----|----|-----|----|----|----|----|
| R2CNN++ | 75.35 | 90.18 | 81.88 | 55.30 | 73.29 | 72.09 | 77.65 | 78.06 | 90.91 | 82.44 | 86.39 | 64.53 | 63.45 | 75.77 | 78.21 | 60.11 |
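
For reference, the reported mAP is simply the mean of the 15 per-class APs in the row above, which is easy to check:

```python
# Per-class APs from the R2CNN++ row; their mean reproduces the reported 75.35 mAP.
aps = [90.18, 81.88, 55.30, 73.29, 72.09, 77.65, 78.06, 90.91,
       82.44, 86.39, 64.53, 63.45, 75.77, 78.21, 60.11]
print(round(sum(aps) / len(aps), 2))  # 75.35
```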

I am using the validation set instead of the test set because the test annotations have not been released yet ... Could you provide your results on the validation set? The numbers I get are much worse than the ones you report in the table ...

Am I missing something? I didn't change anything in your eval.py code, but the mAP results are really disappointing. I would like to know if anyone else has obtained results similar to the ones the authors report.
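
For context, as far as I understand the DOTA HBB evaluation is a Pascal-VOC-style mAP, so eval.py should be computing per-class APs along these lines (a minimal sketch of the all-point-interpolation AP, assuming NumPy; the function name is mine and this is not the repository's actual code):

```python
import numpy as np

def voc_ap(recall, precision):
    """All-point-interpolation AP: area under the precision-recall curve."""
    # Add sentinel values so the curve starts at recall 0 and ends at recall 1.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically non-increasing from right to left.
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    # Sum precision * (change in recall) over the points where recall changes.
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# The mAP is then just the mean of the per-class APs over the 15 DOTA classes.
```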

Best,
Roberto Valle
