feat: Add testing principles #112
Force-pushed from a0930a4 to d90eee3
Inspiration for these comes from:
Overall this looks great. Maybe you could add a section on when to and when not to use snapshot tests, because there may be some confusion around that.
I would love that! I felt that maybe this wasn't the place for it, though. I will consider where that guidance would fit better. It didn't seem to fit here because these principles are meant to be a bit higher-level, i.e. we can derive a rule about snapshot tests being bad from these principles (because they don't describe expectations, because they're not readable, etc.). Edit: I see that we have some guidance on snapshots already: https://github.com/MetaMask/contributor-docs/blob/main/docs/testing/unit-testing.md#snapshots But it falls well short of what I'd want us to have. We can improve it.
Some behaviors are inherently more complex than others, increasing the difficulty of testing them. But in general we want tests to be easy to write, so that we can keep costs low.
If a test is hard to write, consider that as a warning that you might be approaching the problem in a non-optimal way. Resist the urge to power through the work, and instead consider how redesigning the code-under-test or using a different testing strategy might save time.
It's sometimes hard. Refactoring a 1000-line component that is impossible to test, when there are no tests at all to check that the redesign is fine, is a very complicated task that should be done in its own PR and not as part of something else.
In these cases, I'd recommend protecting against major regressions using end-to-end tests and/or integration tests (or even manual testing, if it's the only option). If the test is sufficiently distant from the code, it will be easier to refactor/redesign. This can give you some safety while you work to make the code easier to test comprehensively.
I considered adding guidance like this here, but it felt a bit out of place. I was trying to keep these principles minimal, with specific guidance elsewhere. I am not sure where this advice would best fit though 🤔
Maybe I can include this in the last principle, where it talks about deleting tests / not writing bad tests. I can have a sentence at the end about this.
I wasn't able to find a place for this. I'll look elsewhere for where this advice can live, in a later PR.
Test breakages are expensive. We should do what we can to minimize them.
### 4. Tests should be easy to read
Plus, tests should also be usable to understand the feature under test. Tests are documentation.
Don't forget to link to GH issues in the test comments!
Good advice! The first part about understanding the feature under test is hopefully covered by the first principle. The point about linking GitHub issues is a great practice, but feels too in-the-weeds for this section of the document.
### 2. Tests should catch bugs
This is the main purpose of tests. Take care to ensure that each test actually fails when the described expectation is not met.
They should catch bugs that we don't have yet, or that we think we could have in the future, but they should also catch bugs we had and fixed. Each fixed bug should have one or more associated non-regression tests, and they should refer to the issue with a link in code comments.
Great policy, though this feels like it would belong in a process document somewhere. Not sure exactly where it fits 🤔
A section has been added to the "Testing Overview" document to describe our testing principles. These establish what the goals of our automated test suite are, and help contributors differentiate between good and bad tests.
…not just poorly designed tests
Force-pushed from e293795 to ddd7ad7
Really love the outlined test principles. They are high-level enough that we can follow them regardless of the type of test we are implementing, but at the same time, if the principles are followed, it is highly probable that we end up writing a good test (again, regardless of whether we are writing a unit/integration/e2e test).
🚀 🚀 🚀
docs/testing/overview.md (Outdated)
That doesn't mean we should skip writing tests when it's too difficult. On the contrary: because we have high expectations for quality, we don't have the luxury of writing only the easiest, most valuable tests. We need to aim for total coverage.
But writing and maintaining bad tests is not the shortest path to complete test coverage. Our time is better spent writing good tests, and making investments that allow us to write more good tests. All code can be designed to be easy to test, and even the most complex test setup/teardown can be reasonably fast and easy to read/write with the right tools.
This is such an important statement 👏
great addition @Gudahtt 🙌
Overall LGTM. I was thinking of another principle: Tests should be deterministic
That's somehow implicit from the existing sections already, but do you think it might deserve its own point?
I'm also not sure whether the following could be part of the 1st principle, but we need to make it clearer somewhere:
"Do not test implementation details" -> users do not care about it, and neither should the tests.
Deterministic is a good suggestion! But I hesitate to add it here for two reasons:

- The "tests should rarely break" principle already mostly captures this. The main reason a non-deterministic test is harmful is that it causes test breakages.
- Some tests might intentionally be non-deterministic!
Ideally our code is deterministic, but if we were to have some non-deterministic behaviour, the test might also be non-deterministic. We of course make every effort to reduce unpredictability in our tests, but it's common practice to have some tests that are "more realistic but less reliable" that run nightly or just on the main branch, which makes a different tradeoff between "catching bugs" and "reducing test breakages" (i.e. they break more often but catch some bugs not caught elsewhere).
This strategy should be approached with caution I think, as it is a potential time-sink. But it can be a reasonable tradeoff to make. This is why the principle "Tests should rarely break" was worded as intentionally "fuzzy" rather than being stated as an absolute "tests should never break in the absence of a behavior/API change or a bug".
"Do not test implementation details" -> users do not care about it, and neither should the tests.
This is a good suggestion 🤔
I have been thinking about re-wording the introduction where the goal is stated ("complete test coverage" is ambiguous and would mean different things to different people, and isn't the best representation of our goal anyway). I mention this because this suggestion gets at the goal ("what users care about") in a way that isn't clear in the document yet.
Taken more generally, this might be an inverse of the "tests should catch bugs" point. e.g. "Do not write tests that do not catch bugs". I will consider adding this clarification there.
It makes sense to me, Mark, thank you for your reply 🙏
I wasn't able to find a nice way of fitting in a re-wording of the introduction, or the advice about not testing implementation details. I'll try re-wording the introduction in a later PR.
When I tried fitting in the "Do not test implementation details" part, it felt a little too prescriptive to be in this section, because it could be read as disallowing mockist/London-school/inside-out testing strategies (which can certainly be effective, though they're not my preference). From a certain point of view, what might be called "implementation details" by one person could be "the contract this code is meant to fulfill / the best representation of this code's responsibility" by another.
I do plan on updating the later parts of this document with a more explicit recommendation to prefer less mocking (e.g. a recommendation against mockist/inside-out testing), because I don't think it's practical to apply that strategy to our legacy codebase. But for teams that want to use that strategy for their own isolated modules, it feels a bit heavy-handed to disallow it completely, or to recommend against it as a general principle.