UPD: #945 is going to help with enforcing the use of `pytest` over `unittest`. Anyone sending PRs that update the ignored modules will need to drop the ignore-comments from the top of those files.
We've switched the test runner over to `pytest`, and a few tests already use it natively too. Still, the transition isn't over yet: it's an ongoing effort, and we ask for the community's help to get it done faster.
Help wanted
Here's how you can help:
1. Write new tests in `pytest` style, not `unittest`. Follow `pytest` conventions and use fixtures. Use test functions; classes are unnecessary here. Use `@pytest.mark.parametrize` to have the same test body run against multiple inputs: each one of them will show up in the report separately. Use the `assert` statement, don't inherit from `unittest.TestCase`. Use the built-in `tmp_path` fixture (not `tmpdir`, which is deprecated) to work with temporary data on disk and to make sure that multiple tests don't hit the same resources at the same time.
2. Found a bug? Reproduce it as a `pytest` test function. It should be failing. Cover as many edge and negative cases as you can. Send a PR demonstrating the red CI. After that, add one separate commit on top of that in the PR branch adding `@pytest.mark.xfail` as per https://pganssle-talks.github.io/xfail-lightning. Use the `raises=` arg if it's not the assertion that's failing. Follow the rules from (1).
3. Migrate existing `unittest`-based modules to `pytest`. Follow the rules from (1).

`pytest`-based test module example: https://github.com/ansible/ansible-lint/blob/065faa5/test/TestIncludeMissingFileRule.py
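To make point (1) concrete, here's a minimal sketch of what such a test module could look like. The module and function names are made up for illustration; only the conventions (`parametrize`, bare `assert`, `tmp_path`) come from the guidelines above.

```python
"""Sketch of a pytest-style test module (hypothetical names)."""
import pytest


@pytest.mark.parametrize(
    ("text", "expected"),
    (
        ("", 0),
        ("one", 1),
        ("one two", 2),
    ),
)
def test_word_count(text, expected):
    # A plain function with a bare assert, no TestCase subclass;
    # each parameter set shows up in the report as a separate test.
    assert len(text.split()) == expected


def test_reads_back_written_file(tmp_path):
    # tmp_path is a per-test pathlib.Path, so concurrently running
    # tests never touch each other's files on disk.
    target = tmp_path / "sample.txt"
    target.write_text("hello")
    assert target.read_text() == "hello"
```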
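For point (2), a sketch of the xfail flow with an entirely hypothetical buggy helper: the failing test lands first (red CI), then a separate commit adds the marker. `raises=` is passed because the failure here is an exception rather than a failed assertion.

```python
"""Sketch: marking a known-bug reproducer as xfail (hypothetical bug)."""
import pytest


def flatten(items):
    # Hypothetical helper with a known bug: it blows up on None
    # instead of degrading gracefully.
    return [x for sub in items for x in sub]


@pytest.mark.xfail(raises=TypeError, reason="flatten() chokes on None")
def test_flatten_tolerates_none():
    # Added in a follow-up commit, after the PR first demonstrated
    # this test failing in CI without the marker.
    assert flatten(None) == []
```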
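And for point (3), a before/after sketch of a typical migration (the tests themselves are invented): the `unittest.TestCase` class with camel-case assert helpers becomes plain functions with bare asserts, and `assertRaises` becomes `pytest.raises`.

```python
"""Sketch of a unittest -> pytest migration (hypothetical tests)."""
import unittest

import pytest


# Before: class-based, inheriting from unittest.TestCase.
class TestMath(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

    def test_div_by_zero(self):
        with self.assertRaises(ZeroDivisionError):
            1 / 0


# After: plain functions, bare asserts, pytest.raises.
def test_add():
    assert 1 + 1 == 2


def test_div_by_zero():
    with pytest.raises(ZeroDivisionError):
        1 / 0
```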