[WIP] adding additional tests for notebooks #117

Open
kalset1 wants to merge 23 commits into main

Conversation

@kalset1 commented Jun 16, 2023

Legal Acknowledgement
By contributing to this software project, I agree my contributions are submitted under the BSD license.
I represent I am authorized to make the contributions and grant the license.
If my employer has rights to intellectual property that includes these contributions,
I represent that I have received permission to make contributions and grant the required license on behalf of that employer.

codecov bot commented Jun 17, 2023

Codecov Report

Merging #117 (7e9d075) into main (dcca13c) will increase coverage by 0.43%.
The diff coverage is n/a.

❗ Current head 7e9d075 differs from pull request most recent head 8a7ff21. Consider uploading reports for the commit 8a7ff21 to get more accurate results.

@@            Coverage Diff             @@
##             main     #117      +/-   ##
==========================================
+ Coverage   95.40%   95.84%   +0.43%     
==========================================
  Files          26       26              
  Lines        1371     1371              
  Branches      192      192              
==========================================
+ Hits         1308     1314       +6     
+ Misses         34       30       -4     
+ Partials       29       27       -2     

see 1 file with indirect coverage changes

@kalset1 (Author) commented Jun 22, 2023

The mnist_convolutional and neural_network_formulations notebooks have empty code cells at the bottom, which cause errors.

@carldlaird (Collaborator) left a comment

I added some broad comments. Let's do another pass and review again.


# return testbook for given notebook
def open_book(folder, notebook_fname, **kwargs):
    execute = kwargs.get("execute", True)
@carldlaird (Collaborator):
Can you use named arguments instead of **kwargs in these methods? Actual arguments provide better documentation for someone using the functions.
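For illustration, a minimal sketch of the suggested signature with an explicit keyword argument; the folder handling and docile defaults are assumptions based on the snippets above, not the PR's actual implementation:

import os
from testbook import testbook

# return testbook for given notebook
def open_book(folder, notebook_fname, execute=True):
    # execute=True runs every cell when the testbook context is entered
    os.chdir(folder)  # assumed: notebooks are opened relative to their folder
    return testbook(notebook_fname, execute=execute, timeout=300)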

with testbook(notebook_fname, timeout=300, execute=True) as tb:
    assert tb.code_cells_executed == n_cells
os.chdir(cwd)
book = testbook(notebook_fname, execute=execute, timeout=300)
@carldlaird (Collaborator):
Do we know if this timeout is sufficiently large (but not too large)?



# checks for correct type and number of layers in a model
def check_layers(tb, activations, network):
@carldlaird (Collaborator):
It might be better if the function names matched what the function does (rather than what it is used for). This function actually injects code into the notebook. Maybe "inject_activation_check"?
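For illustration, a sketch of what the renamed helper might look like; it assumes `network` is the name of the network variable inside the notebook and that its layers expose an `activation` attribute (both assumptions, not confirmed by this diff):

def inject_activation_check(tb, activations, network):
    # Inject an assertion into the running notebook kernel: the network
    # named by `network` should have one layer per expected activation,
    # in the expected order.
    tb.inject(
        f"assert [layer.activation for layer in {network}.layers] == {activations!r}"
    )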



# counting number of cells
def cell_counter(notebook_fname, **kwargs):
@carldlaird (Collaborator):
Similar thought here as above. How about "get_cell_count"?

# gets model stats for mnist notebooks
def mnist_stats(tb, fname):
    total_cells = cell_counter(fname)
    tb.inject("test(model, test_loader)")
@carldlaird (Collaborator):
What is this line doing? Can we add some more comments that describe what these (and other) lines are doing?
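As an example of the kind of comments being asked for, here is one hedged reading of the helper, assuming `test(model, test_loader)` is the evaluation function defined inside the MNIST notebooks themselves:

# gets model stats for mnist notebooks
def mnist_stats(tb, fname):
    # count code cells up front so we can later assert they all executed
    total_cells = cell_counter(fname)
    # re-run the notebook's own evaluation loop on the test set so that
    # accuracy/loss values are populated before we read them back
    tb.inject("test(model, test_loader)")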



# neural network formulation notebook helper
def neural_network_checker(tb, ref_string, val1, val2, tolerance):
@carldlaird (Collaborator):
Same comment here. We need some comments to help understand what these are doing.
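For illustration, a guess at the helper's intent based only on its signature; the meaning of `ref_string` and the two-component indexing are assumptions:

import pytest

def neural_network_checker(tb, ref_string, val1, val2, tolerance):
    # Evaluate `ref_string` inside the notebook kernel; tb.ref returns the
    # (JSON-serializable) result to the test process.
    result = tb.ref(ref_string)
    # Check both components against the expected values within `tolerance`.
    assert result[0] == pytest.approx(val1, abs=tolerance)
    assert result[1] == pytest.approx(val2, abs=tolerance)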


# check loss of model
model_loss = tb.ref("nn.evaluate(x, y)")
assert model_loss == pytest.approx(0.000389626, abs=0.00031)
@carldlaird (Collaborator):
The model loss will not be consistent enough to check this way. I think we should just check if it is at least as small as some acceptable tolerance - just so we know if the network begins training more poorly for some reason.
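A minimal sketch of that suggestion; the bound itself is a placeholder to be tuned once typical losses are known:

# check that training produced an acceptably small loss, rather than
# pinning it to an exact value that varies between runs
LOSS_UPPER_BOUND = 1e-3  # hypothetical tolerance

model_loss = tb.ref("nn.evaluate(x, y)")
assert model_loss <= LOSS_UPPER_BOUND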

assert model_loss == pytest.approx(0.00024207, abs=0.00021)

# check layers of model
layers = ["sigmoid", "sigmoid", "sigmoid", "sigmoid", "linear"]
@carldlaird (Collaborator):
I don't think these layer checks are needed. It's more important that the numbers from OMLT are still good.

@kalset1 (Author):
Yes, I think that makes sense. Do you think checking the layers of the imported models (the ones that are built first with PyTorch and then converted using "load_onnx_neural_network_with_bounds") still makes sense? Or is that not needed?

@kalset1 requested a review from carldlaird on July 27, 2023 16:53