Evaluate test scoring methods #567
Comments
I'd like to chime in with an opinion on this, taking Anchor Positioning as a great canonical example. I say it's great because a) it has many tests and it's a sizable feature, b) the Blink implementation is basically complete and shipped, so the correct expectation would be a relatively high score, and c) in my opinion it shows the nuances of the three scoring methods quite well. How do the three methods score Anchor Positioning for Chrome?

First, a quick comment that it appears there's a sizable bug in […]

The sizable difference (96% vs. 78%) between the interop view and the subtest view in this case is roughly down to just one test file, which contains 4291 subtests. Note that there are a total of 253 test files, testing all of the many, many aspects of this large feature, and a total of 12674 subtests. This one test file, which only tests the parsing behavior for a single CSS function, was written in a very diligent way: it loops through all permutations of name, property, size, and fallback, and ensures the complete functionality of this one small aspect of parsing. That's awesome, and should be encouraged. However, because this one small part of the much larger overall feature has been exhaustively tested, the subtest-based score ends up dominated by it.

Essentially, in the abstract, all three scoring methods sound like they could be plausible ways to tabulate these scores. However, in practice, engineers tend to write WPTs as "file-per-functionality". That is, they implement a particular part of a feature and test that part with a new test file. Sometimes, within that test file, they decide to break the test into subtests, and sometimes everything is lumped into a single subtest. Either way, that one file represents the piece of functionality being tested. There are, of course, exceptions to this, but in my experience it's the most common way to test features. We should therefore be scoring via the same idea: one file tests one part of the feature. Subtests are just that, a sub-part of the test, not a standalone test of something important. The correct choice, given the above, is therefore […]
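To make that weighting effect concrete, here is a rough back-of-the-envelope calculation in Python. The two pass rates are assumptions picked only to land near the quoted 78% and 96% figures; they are not real wpt.fyi data.

```python
# Illustrative only: the pass rates below are assumed, chosen so the output
# roughly matches the quoted scores; they are not actual wpt.fyi results.
total_files = 253
total_subtests = 12674
big_file_subtests = 4291                              # the exhaustive parsing test
other_subtests = total_subtests - big_file_subtests   # 8383 across the other 252 files

assumed_other_pass_rate = 0.97   # assumption: the rest of the suite mostly passes
assumed_big_pass_rate = 0.40     # assumption: the parsing file largely fails

# Per-subtest view: every subtest is weighted equally, so the one big file
# carries 4291 / 12674 ~= 34% of the total score.
subtest_score = (assumed_other_pass_rate * other_subtests
                 + assumed_big_pass_rate * big_file_subtests) / total_subtests

# Per-test ("interop") view: every file is weighted equally, so the same file
# carries only 1 / 253 ~= 0.4% of the total score.
interop_score = (assumed_other_pass_rate * (total_files - 1)
                 + assumed_big_pass_rate) / total_files

print(f"subtest view: {subtest_score:.0%}")   # ~78%
print(f"interop view: {interop_score:.0%}")   # ~97%
```

The only thing that changes between the two numbers is which denominator the big file is measured against.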
I was just talking about this with Kadir, and I think the current way things are displayed on the dashboard could have the inadvertent negative impact of discouraging people from writing comprehensive tests with subtests, if doing so made the feature look more poorly implemented than it is. I'm not sure how […]

Thinking about all the partial implementations that exist, it seems to me that there will need to be some ability to manually flag subtests as irrelevant. For example, in CSS, the two-value syntax for […]
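For what it's worth, one way such a flag could plug into scoring is a manually curated exclusion list applied before any tallying. This is purely a sketch: no such mechanism exists today, and the file and subtest names below are made up for illustration.

```python
# Hypothetical exclusion list: subtests judged irrelevant to a feature's
# implementation status (entries here are invented examples).
IRRELEVANT_SUBTESTS = {
    ("css/example/two-value-syntax.html", "legacy two-value form parses"),
}

def relevant_results(results):
    """Drop flagged (test, subtest) pairs before computing any score."""
    return [
        (test, subtest, status)
        for test, subtest, status in results
        if (test, subtest) not in IRRELEVANT_SUBTESTS
    ]
```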
On https://webstatus.dev/ and feature details pages like https://webstatus.dev/features/dialog we show a test score between 0 and 100% based on WPT results.
The current approach is to divide the number of passing subtests by the number of known subtests, the same as the default wpt.fyi view. Let's evaluate how well that works and compare it to other scoring methods.
Desirable properties:
The options are listed below, along with their wpt.fyi URL query parameters. (Note that the URLs aren't exactly right and include tentative tests, working around web-platform-tests/wpt.fyi#3930 to make comparison possible.)
Passing subtests (view=subtest)

This method counts all subtests and divides the number of passing subtests by the total number of known subtests.

Example: 225 / 258 = 87%
Pros:
Cons:
Partially passing tests (view=interop)

This method gives each test file a fractional score (its passing subtests divided by its known subtests) and averages that across all test files.

Example: 105.12 / 109 = 96%
Pros:
Cons:
- […] view=subtest.)
- view=interop would likely cause confusion, as the view is named for the Interop project. (Renaming/aliasing the URL query parameter would address this.)

Fully passing tests (view=test)

This method counts only the tests whose subtests all pass, divided by the total number of tests.

Example: 102 / 109 = 94%
Pros:
Cons:
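For comparison, here is a minimal sketch of the three methods side by side, assuming results are available as (passing, known) subtest counts per test file. The function names are mine, not wpt.fyi's, and this ignores harness errors, missing runs, and test-level statuses.

```python
from typing import List, Tuple

# One entry per test file: (passing_subtests, known_subtests).
Results = List[Tuple[int, int]]

def subtest_score(results: Results) -> float:
    """view=subtest: all passing subtests divided by all known subtests."""
    passing = sum(p for p, _ in results)
    known = sum(t for _, t in results)
    return passing / known

def interop_score(results: Results) -> float:
    """view=interop: each file contributes its own pass fraction, weighted equally."""
    return sum(p / t for p, t in results) / len(results)

def test_score(results: Results) -> float:
    """view=test: a file counts only if every one of its subtests passes."""
    return sum(1 for p, t in results if p == t) / len(results)
```

The only difference between the last two is partial credit: interop_score gives a file p/t, while test_score gives it 1 or 0.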
Next steps
Evaluate how well each method corresponds with feature completeness/quality, by taking a random sample of features and listing what the scores would be. Things to consider:
cc @gsnedders @jgraham since we have discussed test scoring many times over the years, most recently in web-platform-tests/rfcs#190.