In some cases, users are more interested in changes in the problems found by tests than in the full list of problems. For example, when working on passing a conformance test suite, regressions require more immediate attention than tests that have never passed before. In the case of static code checks, projects might want to keep old warnings enabled that they intend to fix eventually, while acting on new warnings immediately.
Another reason to compare task runs is to debug intermittently failing tests, to try to figure out what could be triggering the failure. In this case it would be useful to compare logging, and timestamps in particular.
We could provide an easy way to see the differences in mid-level data between two runs. The advantage of this is that it automatically works for every framework for which the user has written an extractor. The disadvantage is that in some cases framework-specific knowledge may be required to make a useful comparison. For example, not only the number of failing conformance tests is important, but also which tests fail: if one test that used to fail now passes and one test that used to pass now fails, the net total is zero, but it is an important change.
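The "which tests, not just how many" point can be sketched with a simple set diff. This is a hypothetical illustration, not the project's actual mid-level data format: it assumes each run's results are reduced to a set of failing test names.

```python
# Hypothetical sketch: diffing the failing-test sets of two runs.
# Representing a run as a set of failing test names is an assumption
# made for illustration, not this project's actual data model.

def diff_failures(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    """Return regressions (newly failing) and fixes (no longer failing)."""
    return {
        "regressions": current - previous,  # failing now, passed before
        "fixes": previous - current,        # passed now, failed before
    }

prev_run = {"test_io", "test_net"}
curr_run = {"test_net", "test_gui"}
delta = diff_failures(prev_run, curr_run)
print(delta["regressions"])  # {'test_gui'}
print(delta["fixes"])        # {'test_io'}
```

Note that both sets here have size two, so a count-only comparison would report no change, while the diff surfaces one regression and one fix.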
This feature could be integrated with the dashboard (see #14), by showing not just the latest executions, but also the delta compared to the previous one.
Another approach would be to pass a locator for information from the previous run to the wrapper. The locator could be the URL of the report, or it could be an output product from the previous run that is used as an input by the current run.
Both approaches complement each other: using mid-level data is a good solution if no framework-specific information is needed, while passing report locators is a good solution if a framework-specific comparator has to be written.
I created issues in the GitHub tracker for the ideas in this document
that were still relevant: #9, #10, #11, #12, #13, #14, #15, #16, #17,
#18, #19, #20.
We want to create a new roadmap at some point, containing visions for
the future rather than concrete feature plans. But when we do that,
it should not be part of the installed documentation; it should be
kept on a web site or wiki instead.