The generative loss in implementation #76
In the paper, the objective function to minimize is -log p(x) - M·log(a), where M is the dimensionality of x and a is the discretization level of the data.
However, in the code, objective first adds this constant c along with logpz, and then a negative sign is applied to the objective to get the generative loss:
glow/model.py, lines 172, 181, and 184 in eaff217
It seems to minimize -log p(x) + M·log(a), not the loss written in the paper, which is -log p(x) - M·log(a).
Do you ignore the constant because it will not affect training, or have I missed something in the code?
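To make the sign question concrete, here is a minimal sketch of the computation described above. It is not the repository's exact code; the names generative_loss, log_pz, num_pixels, and n_bins are illustrative.

```python
# Sketch: build the objective additively, then negate it, as described above.
import numpy as np

def generative_loss(log_pz: float, num_pixels: int, n_bins: int = 256) -> float:
    """Assemble the objective as described in the issue, then negate it.

    log_pz     -- log-likelihood of the latent under the prior (flow
                  log-determinants assumed already folded in)
    num_pixels -- M, the dimensionality of x
    n_bins     -- number of discretization bins
    """
    objective = 0.0
    objective += -np.log(n_bins) * num_pixels  # the constant c
    objective += log_pz                        # logpz
    return -objective                          # loss = -log p(z) + M*log(n_bins)

# Example with M = 32*32*3 and an arbitrary log p(z):
print(generative_loss(log_pz=-5000.0, num_pixels=32 * 32 * 3))
```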
Comments

It's an optimization:

Just to clarify, the purpose of the constant "scaling penalty"

Given that a normalizing flow gives you a correct log-likelihood of your data under your model, it would be a shame to omit

Thank you for the explanation!
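A side note on the sign question: if the paper's a denotes the discretization bin width, i.e. a = 1/n_bins (an assumption about the notation, worth checking against the paper), then the quantity the code minimizes is algebraically the same as the paper's loss:

```latex
% Assuming a = 1/n_bins (the discretization bin width):
\begin{aligned}
-\log p(x) + M\log(n_\mathrm{bins})
  &= -\log p(x) - M\log\!\left(\tfrac{1}{n_\mathrm{bins}}\right) \\
  &= -\log p(x) - M\log a
\end{aligned}
```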