In the context of benchmarks such as the example in https://smarie.github.io/pytest-patterns/examples/data_science_benchmark/, each test node's results are saved in the results_bag fixture, and later in test_synthesis the global module_results_[dct/df] fixture is used to collect everything.
This is a bit sad when the objective is just to create results and store them in files (one file per test id), after each node.
Providing an easy way to hook after each test run and write the results to json or csv files would be much more convenient.
Maybe even a command-line option?
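For reference, the current two-step pattern looks roughly like this (a minimal sketch; the test, its parameters and the stored values are purely illustrative):

import pytest

@pytest.mark.parametrize("dataset", ["a", "b"])
def test_bench(dataset, results_bag):
    # each test stores its own results in the harvest `results_bag` fixture
    results_bag.accuracy = 0.9 if dataset == "a" else 0.8

def test_synthesis(module_results_df):
    # runs last: the global fixture gathers everything from the module into a pandas DataFrame
    module_results_df.to_csv("synthesis.csv")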
If one wants to harvest at the end of each test node, would you get the session from the request and then look up the test id? I really like @smarie's pytest-cases and pytest-harvest!
Indeed, currently one way to do this is for example to create a my_results_dumper fixture:
import json
from pathlib import Path

from pytest import fixture
from pytest_harvest import get_pytest_status  # status helper used below (import location assumed)


@fixture
def my_results_dumper(results_bag, request):
    """A fixture to dump the results after each run, to a distinct file named after the test node id"""
    # Let the test execute
    yield results_bag

    # Grab the current test
    item = request.node
    testnode_id = item.nodeid.split('::')[-1]  # get the test id

    # Store the test's information in a dictionary
    (_, _), status_dct = get_pytest_status(item, durations_in_ms=False, current_request=request)
    result = dict(test_id=testnode_id,
                  status=status_dct.get("call")[0],
                  duration_ms=status_dct.get("call")[1])

    # Add all of the `results_bag` contents to this dictionary
    result.update(results_bag)

    # Finally dump as a json file
    dest_folder = Path(".results")
    dest_folder.mkdir(exist_ok=True, parents=True)
    with open(dest_folder / ("%s.json" % testnode_id), "w") as f:
        json.dump(result, f, indent=4, default=str, sort_keys=True)
Something like this.
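For instance, a test would then simply request this fixture and fill it with whatever it wants to persist (the test name and stored value below are hypothetical):

def test_foo(my_results_dumper):
    # anything stored here ends up in the per-test json file under .results/
    my_results_dumper.accuracy = 0.95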
I am wondering if we could make it easier... not sure, as this is already pretty straightforward.
Let me know if you find more elegant ways.