I have tried to obtain similar results to the ones reported in the HBB table:
Task2 - Horizontal Leaderboard

| Approaches | mAP | PL | BD | BR | GTF | SV | LV | SH | TC | BC | ST | SBF | RA | HA | SP | HC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [R2CNN++] | 75.35 | 90.18 | 81.88 | 55.30 | 73.29 | 72.09 | 77.65 | 78.06 | 90.91 | 82.44 | 86.39 | 64.53 | 63.45 | 75.77 | 78.21 | 60.11 |
I am using the validation set instead of the testing set because the test annotations have not been released yet. Could you provide your results on the validation set? The numbers I get are much worse than the ones you report in the table.
Am I missing something? I didn't change anything in your eval.py code, but the mAP results are really disappointing. I would like to know if someone has obtained results similar to the ones the authors report.
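One thing worth ruling out before blaming the model: the AP interpolation convention. A minimal sketch of VOC-style AP below (this is generic, not the repo's eval.py; the function name `voc_ap` and its arguments are illustrative) shows the two common variants. The 11-point (VOC2007) metric versus the all-point (VOC2010+) metric alone can shift per-class AP by a couple of points, so it's worth checking which one your evaluation uses before comparing against the leaderboard.

```python
import numpy as np

def voc_ap(recall, precision, use_07_metric=False):
    """Average precision from monotonically increasing recall and
    matching precision arrays.

    use_07_metric=True  -> 11-point interpolation (VOC2007 style)
    use_07_metric=False -> all-point interpolation (VOC2010+ style)
    """
    recall = np.asarray(recall, dtype=float)
    precision = np.asarray(precision, dtype=float)

    if use_07_metric:
        # Sample max precision at recall thresholds 0.0, 0.1, ..., 1.0
        ap = 0.0
        for t in np.arange(0.0, 1.1, 0.1):
            mask = recall >= t
            p = np.max(precision[mask]) if np.any(mask) else 0.0
            ap += p / 11.0
        return ap

    # All-point: pad the curve, take the precision envelope, then
    # integrate precision over each recall step.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# A detector with perfect precision at every recall level scores AP = 1.0
# under both conventions.
print(voc_ap([0.5, 1.0], [1.0, 1.0], use_07_metric=True))
print(voc_ap([0.5, 1.0], [1.0, 1.0], use_07_metric=False))
```

If the official numbers were computed on the evaluation server (which uses one fixed convention) and a local eval.py run uses the other, that difference compounds with the train/val vs. train/test split gap.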
Best,
Roberto Valle