Gradient check example #1497
base: develop
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##           develop    #1497      +/-   ##
===========================================
- Coverage    82.94%   82.90%   -0.05%
===========================================
  Files          163      163
  Lines        13790    13790
===========================================
- Hits         11438    11432       -6
- Misses        2352     2358       +6
Fixed two small typos.
Thanks. I think I would just skip over `check_grad` and directly introduce `check_grad_multi_eps`. The latter performs much better.

Co-authored-by: Daniel Weindl [email protected]
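As a hedged illustration of the suggestion above (not part of the PR itself): a minimal sketch of calling `check_grad_multi_eps`, assuming an existing pypesto `problem` whose objective provides gradients and which has no fixed parameters; the exact keyword names should be verified against the pypesto API.

```python
import numpy as np

# Assumed setup: `problem` is an existing pypesto.Problem whose objective
# provides gradients (e.g. an AmiciObjective) and has no fixed parameters.
rng = np.random.default_rng(seed=1)
x = rng.uniform(problem.lb, problem.ub)

# check_grad_multi_eps compares the objective gradient against finite
# differences computed with several step sizes and keeps, per parameter,
# the best-matching approximation.
df = problem.objective.check_grad_multi_eps(
    x,
    multi_eps=[1e-1, 1e-3, 1e-5, 1e-7, 1e-9],
)
print(df)
```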
I agree that
Looks good to me, thanks for this.
kwargs = {
    "x_free": self.amici_object_builder.petab_problem.x_free_indices
}
return super().check_gradients_match_finite_differences(
-     *args, x=x, x_free=x_free, **kwargs
+     *args, x=x, **kwargs
Is there a specific reason for these changes? I feel like keeping an argument explicit always gives more info.
This was necessary to fix the error described in #1494.
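For context, a hedged sketch of how the fixed method is typically called from user code; `objective` and `x` are assumed to exist already (e.g. an `AmiciObjective` built via `pypesto.petab.PetabImporter` and a parameter vector), and the return value is assumed to be a boolean.

```python
# With the change above, the free-parameter indices are filled in
# internally via `x_free`, so the caller only passes the parameter vector.
# The method reports whether gradients and finite differences agree within
# the configured tolerances.
matches = objective.check_gradients_match_finite_differences(x=x)
print("Gradient matches finite differences:", matches)
```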
"source": [ | ||
"# Gradient checks\n", | ||
"\n", | ||
"It is best practice to do gradient checks before and after gradient-based optimization.\n", |
I think it would be good to include some rationale for why to check it afterwards and what to look for. I.e. except for parameters with active bounds, the values should be close to 0. At the same time, this might make it difficult to get good FD approximations.
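To make that rationale concrete, a hedged sketch of a post-optimization check, assuming an optimized vector `x_opt` and a `problem` without fixed parameters (so reduced and full vectors coincide); `get_grad` is assumed to be available on the objective.

```python
import numpy as np

# At a (local) optimum the gradient entries should be close to zero,
# except for parameters sitting at an active bound, where only the
# projected gradient is expected to vanish.
grad = problem.objective.get_grad(x_opt)
at_bound = np.isclose(x_opt, problem.lb) | np.isclose(x_opt, problem.ub)

print("max |grad| away from active bounds:", np.max(np.abs(grad[~at_bound])))
print("parameters at active bounds:", np.where(at_bound)[0])
```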
}
},
"source": [
"#### Set up an example problem\n",
I think the heading levels aren't consistent yet (https://pypesto--1497.org.readthedocs.build/en/1497/example/gradient_check.html)
"source": [ | ||
"#### Set up an example problem\n", | ||
"\n", | ||
"Create the pypesto problem and a random vector of parameter values." |
The latter only happens in the next section. Keep text/code coherent.
"Explanation of the gradient check result columns:\n", | ||
"- `grad`: Objective gradient\n", |
Add blank line before listing for proper list rendering (see https://pypesto--1497.org.readthedocs.build/en/1497/example/gradient_check.html)
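To complement the column explanation, a hedged snippet for inspecting the returned table; the column name `rel_err` and the 1e-2 threshold are assumptions to be checked against the actual gradient-check output.

```python
# `df` is the DataFrame returned by the gradient check.
# Sort by relative error to spot the parameters whose gradient entries
# deviate most from the finite-difference approximation.
print(df.sort_values("rel_err", ascending=False).head())

# Flag parameters exceeding an (illustrative) relative-error tolerance.
suspicious = df[df["rel_err"] > 1e-2]
print("Parameters with suspicious gradient entries:", list(suspicious.index))
```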
"### How to \"fix\" my gradients?\n", | ||
"\n", | ||
"- Find suitable simulation tolerances.\n", | ||
"- Consider switching from adjoint to forward sensitivities, which tend to be more robust.\n", |
I am not sure whether to include that ASA/FSA point. Generally, ASA should be able to provide accurate gradients too.
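For the tolerance and sensitivity-method bullets, a hedged sketch of the relevant AMICI solver settings when using an `AmiciObjective`; attribute and method names are taken from the AMICI/pypesto APIs as commonly used and should be verified against the installed versions.

```python
import amici

# `objective` is assumed to be a pypesto AmiciObjective.
solver = objective.amici_solver

# Tighten integration tolerances; accurate gradients typically need
# tighter tolerances than a plain forward simulation.
solver.setAbsoluteTolerance(1e-12)
solver.setRelativeTolerance(1e-10)

# Choose the sensitivity method (forward or adjoint) and enable
# first-order sensitivities.
solver.setSensitivityMethod(amici.SensitivityMethod.forward)
solver.setSensitivityOrder(amici.SensitivityOrder.first)
```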
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"### How to \"fix\" my gradients?\n", |
Maybe it should be distinguished here whether one brought one's own gradient function or is using the PEtab-AMICI pipeline. Most points only apply to the latter.
"- Find suitable simulation tolerances.\n", | ||
"- Consider switching from adjoint to forward sensitivities, which tend to be more robust.\n", | ||
"- Check the simulation logs for Warnings and Errors.\n", | ||
"- Ensure that the model is correct.\n", |
I guess the question is not so much about the model. With a wrong model, we should still get accurate gradients. It's more about the correctness of the gradient computation.
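Related to the point above about checking the gradient computation rather than the model: when one brings one's own gradient function, the check targets exactly that function. A self-contained, hedged sketch using the standard `pypesto.Objective` interface (Rosenbrock function with an analytic gradient):

```python
import numpy as np
import pypesto

def fun(x):
    # Rosenbrock function.
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    # Hand-written analytic gradient; this is what the check verifies.
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

objective = pypesto.Objective(fun=fun, grad=grad)
print(objective.check_grad(np.array([0.5, 0.5])))
```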
AmiciObjective.check_gradients_match_finite_differences #1494

I'd be happy about suggestions on the "Best practices" and "How to fix my gradients".