why stop gradient? #37
Hello. May I ask why, in the code, the gradient from the sigma prediction branch is stopped from flowing back into the bbox branch?
https://github.com/ethanhe42/Stronger-yolo-pytorch/blob/master/models/yololoss.py#L101
If stopping the gradient of the sigma prediction branch is necessary: a 2022 paper proposed using IoU as an exponent to bridge the gap between the Gaussian distribution and the Dirac delta distribution, addressing the fact that the two cannot be made exactly equivalent. In that case, does the gradient transmitted by the sigma loss also need to be stopped?

> tbh, I don't exactly remember the details. You can try removing stopgrad and comparing the performance.

Thank you for your reply; that answers my question.
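For readers landing on this issue: the stop-gradient pattern under discussion can be illustrated with a toy manual chain rule. This is a hypothetical sketch, not the repo's actual `yololoss.py` code — it assumes a shared feature `f = w_s * x` feeding a bbox head (`mu`) and an uncertainty head (`sigma`), with a Gaussian NLL loss. Stopping the sigma branch's gradient means its loss term is not propagated back through the shared trunk, so minimizing the uncertainty term cannot perturb the features the bbox branch relies on.

```python
import math

def grads_shared(x, w_s, w_b, w_c, target, stop_sigma_grad=True):
    """Manual chain rule through a shared feature feeding two heads.

    Hypothetical sketch (not the repo's exact code):
      f     = w_s * x          shared trunk feature
      mu    = w_b * f          bbox head prediction
      sigma = exp(w_c * f)     uncertainty head prediction (kept positive)
      L     = log(sigma) + (target - mu)^2 / (2 * sigma^2)   Gaussian NLL

    With stop_sigma_grad=True, the sigma path's contribution to the shared
    weight's gradient is dropped, mimicking a stop-gradient / .detach()
    between the sigma branch and the shared trunk.
    """
    f = w_s * x
    mu = w_b * f
    sigma = math.exp(w_c * f)
    err = target - mu
    loss = math.log(sigma) + err ** 2 / (2 * sigma ** 2)
    dL_dmu = -err / sigma ** 2                      # d(loss)/d(mu)
    dL_dsigma = 1.0 / sigma - err ** 2 / sigma ** 3  # d(loss)/d(sigma)
    g_shared = dL_dmu * w_b * x                      # bbox path into the trunk
    if not stop_sigma_grad:
        # sigma path into the trunk (cut when the gradient is stopped)
        g_shared += dL_dsigma * sigma * w_c * x
    return loss, g_shared
```

In PyTorch the same effect is typically obtained with `.detach()` (e.g. detaching the sigma head's input feature, or detaching the bbox prediction inside the uncertainty term); the loss value is unchanged either way — only which parameters receive gradient differs, which is what the maintainer suggests comparing empirically.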