Hi, sorry for the delay. In my experiments, setting the batch size large enough (at least 150) was necessary to reach high accuracy. This is because the triplet loss uses every composable triplet in each minibatch, so a larger batch size means more triplets are seen per iteration.
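For context, under a batch-all strategy the number of usable (anchor, positive, negative) triplets grows roughly cubically with the batch size, so dropping from 250 to 32 shrinks the training signal drastically. Below is a minimal PyTorch sketch of that batch-all formulation; the function name and the `margin=0.3` value are illustrative assumptions, not this repo's actual code.

```python
import torch

def batch_all_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-all triplet loss: averages the hinge loss over every valid
    (anchor, positive, negative) triplet in the batch.
    embeddings: (B, D) float tensor; labels: (B,) integer tensor.
    NOTE: margin=0.3 is an assumed hyperparameter, tune it per dataset."""
    # Pairwise Euclidean distances, shape (B, B)
    dist = torch.cdist(embeddings, embeddings, p=2)

    # valid[a, p, n] is True when labels[a] == labels[p] != labels[n], a != p
    same = labels.unsqueeze(0) == labels.unsqueeze(1)                  # (B, B)
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & not_self                                         # anchor-positive pairs
    neg_mask = ~same                                                   # anchor-negative pairs
    valid = pos_mask.unsqueeze(2) & neg_mask.unsqueeze(1)              # (B, B, B)

    # loss[a, p, n] = max(d(a, p) - d(a, n) + margin, 0)
    loss = dist.unsqueeze(2) - dist.unsqueeze(1) + margin
    loss = torch.relu(loss) * valid

    # Average over triplets that actually violate the margin
    num_active = (loss > 0).sum().clamp(min=1)
    return loss.sum() / num_active
```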
If it is hard to increase the batch size, how about trying hard sample mining to increase the number of effective triplets?
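If memory rules out a larger batch, one common variant of hard mining is batch-hard: each anchor keeps only its hardest positive (farthest same-class sample) and hardest negative (closest other-class sample), so every sample still yields one informative triplet even in a small batch. This is a generic sketch of that technique, not necessarily what this repo implements, reusing the same assumed margin as above:

```python
def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard mining: one hardest triplet per anchor.
    embeddings: (B, D) float tensor; labels: (B,) integer tensor."""
    dist = torch.cdist(embeddings, embeddings, p=2)            # (B, B)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hardest positive: maximum distance among same-class pairs
    hardest_pos = (dist * (same & not_self)).max(dim=1).values

    # Hardest negative: minimum distance among other-class pairs;
    # mask same-class entries with +inf so they are never selected
    inf = torch.full_like(dist, float("inf"))
    hardest_neg = torch.where(same, inf, dist).min(dim=1).values

    return torch.relu(hardest_pos - hardest_neg + margin).mean()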
Hi, thanks for the nice code.
However, I had to reduce the batch size to 32 (the original code uses 250) because of memory limits.
The results are lower than I expected:
(top-1: 80.1% / top-5: 92.2% / top-10: 95.5% / test mean: 61.8%)
What can I do to improve the performance?
Thanks.