Use 1.0 for good AtomMapping scores, 0.0 for bad AtomMapping scores #529
Conversation
Force-pushed from dac53fd to ae585dc (Compare)
Codecov Report
Patch coverage:
Additional details and impacted files

@@           Coverage Diff           @@
##            main     #529    +/-  ##
==========================================
- Coverage   92.07%   91.97%   -0.10%
==========================================
  Files         113      116       +3
  Lines        6938     7054     +116
==========================================
+ Hits         6388     6488     +100
- Misses        550      566      +16

☔ View full report in Codecov by Sentry.
This looks good to me! Should we check whether networks generated with the flipped score match networks generated using the original score?
Just a few nits with docstring style and phrasing.
"""Maximum command substructure rule | ||
|
||
This rule was originally defined as:: | ||
This rule is defined as:: |
👍
Should we check whether networks generated with the flipped score match networks generated using the original score?
Is test_minimal_spanning_network_regression sufficient here? Otherwise I agree, we probably should consider a more in-depth check.
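One way to see why a regression test should still pass after the flip: for a spanning tree over a fixed node set, every tree has the same number of edges, so minimising sum(1 - score) selects exactly the same edges as maximising sum(score). A small self-contained sketch (hypothetical toy data, not the actual openfe test):

```python
# Toy check: a spanning tree chosen by maximising total similarity (new
# convention, 1.0 = good) is the same tree chosen by minimising total cost
# (1 - score), since both sums differ only by a constant for fixed edge count.
import itertools


def spanning_trees(nodes, edges):
    """Yield every (n-1)-edge subset of *edges* that spans *nodes* acyclically."""
    n = len(nodes)
    for combo in itertools.combinations(edges, n - 1):
        parent = {v: v for v in nodes}

        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v

        ok = True
        for a, b, _ in combo:
            ra, rb = find(a), find(b)
            if ra == rb:  # cycle: not a tree
                ok = False
                break
            parent[ra] = rb  # union
        if ok:
            yield combo


nodes = ["A", "B", "C", "D"]
# (node, node, similarity score in the new convention: 1.0 good, 0.0 bad)
edges = [("A", "B", 0.9), ("A", "C", 0.2), ("A", "D", 0.5),
         ("B", "C", 0.8), ("B", "D", 0.1), ("C", "D", 0.7)]

best_by_score = max(spanning_trees(nodes, edges),
                    key=lambda t: sum(s for *_, s in t))
best_by_cost = min(spanning_trees(nodes, edges),
                   key=lambda t: sum(1.0 - s for *_, s in t))
assert set(best_by_score) == set(best_by_cost)
```

This is only an illustration of the invariant; the real regression test exercises the actual network planner on a non-trivial network.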
0.0 is dead, long live 1.0
Co-authored-by: Josh A. Mitchell <[email protected]>
Force-pushed from 0aabb67 to 0fbf018 (Compare)
I'll check with Ben once he's back, but I think the perses scorer was already in this direction, so that doesn't require flipping.
@IAlibay this is the regression test here: https://github.com/OpenFreeEnergy/openfe/blob/main/openfe/tests/setup/test_network_planning.py#L198 It looks like a non-trivial network (it's not just radial, it's got plenty of edges), so I doubt we'd be getting it through luck if we hadn't correctly made this change.
@richardjgowers am I good to merge here? Would be good to unblock this for @hannahbaumann to test out the newly generated MSTs.
Previously 0.0 was used for "good" scores and 1.0 for "bad" scores, so networks could be created by treating the total sum of all weights across a network as the cost function. This created friction with existing and ongoing efforts, where 1.0 is considered perfect similarity and 0.0 is bad/no similarity.
This PR flips the scores to align with community practice.
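In the new convention, a planner that minimises total weight just converts the similarity score into a cost first. A minimal sketch of that conversion (assumed helper name, not the openfe API):

```python
# Hypothetical helper illustrating the flipped convention: scorers return
# similarity in [0, 1] with 1.0 = perfect, so minimising planners use 1 - score.
def score_to_cost(score: float) -> float:
    """Map a similarity score (1.0 good) to an edge cost (0.0 good)."""
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"score must be in [0, 1], got {score}")
    return 1.0 - score


assert score_to_cost(1.0) == 0.0  # perfect mapping -> zero cost
assert score_to_cost(0.0) == 1.0  # no similarity -> maximal cost
```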
Developer's Certificate of Origin