Report differences between two runs of the same task #15

Open
mthuurne opened this issue Jun 19, 2019 · 0 comments
Labels
enhancement New feature or request

Comments

@mthuurne
Member

In some cases, users are more interested in changes in the problems found by tests than in the full list of problems. For example, when working towards passing a conformance test suite, regressions require more immediate attention than tests that have never passed. In the case of static code checks, a project might keep old warnings enabled with the intention of fixing them eventually, while acting on new warnings immediately.

Another reason to compare task runs is to debug intermittently failing tests: comparing a failing run against a passing one can help figure out what triggers the failure. In this case it would be useful to compare logging and timestamps in particular.

We could provide an easy way to see the differences in mid-level data between two runs. The advantage is that this automatically works for every framework for which the user has written an extractor. The disadvantage is that in some cases framework-specific knowledge is required to make a useful comparison. For example, not only the number of failing conformance tests matters, but also which tests fail: if one test that used to fail now passes and one test that used to pass now fails, the net total is zero, yet this is an important change.
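
As an illustration of why a set-based diff is more informative than counts alone, here is a minimal sketch (the function name and data shapes are hypothetical, not part of any existing API):

```python
def diff_failing_tests(previous, current):
    """Compare the failing test names from two runs of the same task.

    Returns (regressions, fixes): the tests that started failing and
    the tests that stopped failing.
    """
    previous = set(previous)
    current = set(current)
    return current - previous, previous - current

# One test regressed and one was fixed: the net count is unchanged,
# but the diff still surfaces both important changes.
regressions, fixes = diff_failing_tests(
    previous={'test_alpha', 'test_beta'},
    current={'test_alpha', 'test_gamma'},
)
assert regressions == {'test_gamma'}
assert fixes == {'test_beta'}
```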

This feature could be integrated with the dashboard (see #14) by showing not just the latest execution of each task, but also the delta compared to the previous one.

Another approach would be to pass the wrapper a locator for information from the previous run. The locator could be the URL of the previous run's report, or an output product from the previous run that is used as an input by the current run.
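
As a sketch of what consuming such a locator could look like (the names and the JSON report format here are assumptions for illustration, not an existing API):

```python
import json
from urllib.request import urlopen

def load_previous_failures(locator):
    """Load the failing test names from a previous run.

    `locator` is either the URL of the previous run's report or the
    path of an output product from that run that the current run
    receives as an input.
    """
    # Assumption: the report is JSON with a 'failing_tests' list;
    # a real report format would be framework specific.
    if locator.startswith(('http://', 'https://')):
        with urlopen(locator) as response:
            report = json.load(response)
    else:
        with open(locator, encoding='utf-8') as inp:
            report = json.load(inp)
    return set(report['failing_tests'])
```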

Both approaches complement each other: using mid-level data is a good solution when no framework-specific information is needed, while passing report locators is a good solution when a framework-specific comparator has to be written.

mthuurne added the enhancement label Jun 19, 2019
mthuurne added a commit that referenced this issue Jun 19, 2019
I created issues in the GitHub tracker for the ideas in this document
that were still relevant: #9 #10 #11 #12 #13 #14 #15 #16 #17 #18 #19 #20

We want to create a new roadmap at some point, containing visions for
the future rather than concrete feature plans. When we do, it should
not be part of the installed documentation, but instead be kept on a
website or wiki, for example.