Build twenty-twenty for GLTF #30

Open
adamchalmers opened this issue Aug 21, 2024 · 2 comments

Comments


adamchalmers commented Aug 21, 2024

We use the twenty-twenty (20/20) crate to do visual regression testing on our frontend, our CLI, and our backend. 20/20 works by comparing two PNGs or two h264 frames and checking that their visual similarity is above some configurable threshold (e.g. 99% similar).
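To make the idea concrete, here's a minimal sketch of the kind of check described above: compare two same-sized pixel buffers and require that the fraction of identical pixels meets a threshold. (This is an illustration, not 20/20's actual API — the real crate decodes PNGs/h264 frames; here we operate on raw RGBA bytes to keep the example self-contained.)

```rust
/// Fraction of bytes that are identical between two equal-length buffers.
fn similarity(a: &[u8], b: &[u8]) -> f64 {
    assert_eq!(a.len(), b.len(), "images must have the same dimensions");
    let matching = a.iter().zip(b).filter(|(x, y)| x == y).count();
    matching as f64 / a.len() as f64
}

/// Panic if the two buffers are less similar than `threshold`.
fn assert_similar(a: &[u8], b: &[u8], threshold: f64) {
    let s = similarity(a, b);
    assert!(s >= threshold, "similarity {s} below threshold {threshold}");
}

fn main() {
    let expected = vec![255u8; 400]; // a 10x10 all-white RGBA image
    let mut actual = expected.clone();
    actual[0] = 254; // one channel differs slightly
    assert_similar(&expected, &actual, 0.99); // 399/400 = 99.75%, passes
}
```

The anti-aliasing false positive is easy to see in this framing: a rendering tweak that nudges one channel of every pixel along the grid pushes the matching fraction below the threshold even though the geometry is unchanged.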

This crate works very well, especially because when it detects a change, it's easy for a programmer to visually compare the images and see what's gone wrong. But it has some problems:

  • False positives (e.g. the anti-aliasing settings of the grid change, causing every pixel along the grid to be slightly different, dropping the similarity threshold below 99%)
  • False negatives (e.g. a shape has lots of complicated internal geometry which is hidden from the camera by an occluding face, and when it changes, the camera cannot see it)

We have many concrete examples of the former (everyone has a story about "oh the engine changed rendering slightly, someone needs to update all the frontend snapshots"). An example of the latter just happened, where @benjamaan476 wants to test an engine change which adds internal edges to a model. Because the edges are internal, they can't be seen by the camera and 20/20 is useless. But the edges matter for future applications, and their existence should be tested.

Proposed solution
20/20 checks our model geometry, rendered visually. I think we need a second library for checking model geometry, rendered textually. We could export the model in some text format (e.g. our GLTF) and check it against the expected GLTF.

This approach has a few problems:

  • There might be many possible ways to describe the same geometry with different GLTFs
    • A trivial example: outputting a degree of either 0 or -0 is a change in text with no meaningful difference in geometry
    • Another trivial example: unordered maps/sets being output to text in different orders
    • A less trivial example: representing a straight line as either Curve::Polynomial{degree: 1} or Curve::StraightLine
    • We should design the assert_gltf function to understand these equivalences and ignore them, just like assert_eq on two HashMaps in Rust doesn't care about the ordering of items.
  • It's not really clear how to present the difference between two complicated GLTF files; they're not as intuitive to inspect as two pictures.
    • So I think this would be a complement to 20/20, not a replacement.
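A rough sketch of what canonicalization inside a hypothetical assert_gltf could look like (the names Curve, canonical_float, etc. are illustrative, not a real API): normalize -0.0 to 0.0, collapse equivalent curve representations, and compare map-like data in key-sorted order so iteration order can't cause spurious failures.

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
enum Curve {
    Polynomial { degree: u32 },
    StraightLine,
}

/// Map -0.0 to +0.0 so the two spellings compare as identical text.
fn canonical_float(x: f64) -> f64 {
    if x == 0.0 { 0.0 } else { x } // -0.0 == 0.0 in IEEE 754
}

/// Collapse equivalent representations: a degree-1 polynomial *is* a line.
fn canonical_curve(c: Curve) -> Curve {
    match c {
        Curve::Polynomial { degree: 1 } => Curve::StraightLine,
        other => other,
    }
}

/// Re-key unordered pairs into a sorted map so two logically-equal
/// collections always compare equal, whatever order they were emitted in.
fn canonical_pairs(pairs: &[(String, f64)]) -> BTreeMap<String, f64> {
    pairs
        .iter()
        .map(|(k, v)| (k.clone(), canonical_float(*v)))
        .collect()
}

fn main() {
    assert_eq!(canonical_float(-0.0).to_bits(), 0.0f64.to_bits());
    assert_eq!(
        canonical_curve(Curve::Polynomial { degree: 1 }),
        Curve::StraightLine
    );
    let a = vec![("y".to_string(), -0.0), ("x".to_string(), 1.0)];
    let b = vec![("x".to_string(), 1.0), ("y".to_string(), 0.0)];
    assert_eq!(canonical_pairs(&a), canonical_pairs(&b));
}
```

The design choice here is the usual one for structural diffing: canonicalize both sides first, then do a plain equality check, rather than teaching the comparison itself about every equivalence.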

We could call the library "xray_specs" because they let you see internal geometry that not even 20/20 vision can :)


lf94 commented Aug 21, 2024

100%. I've always asserted that any visual check of our models should be of the actual rendered geometry, for various reasons. I would never compare the textual representations, as there are countless ways to represent the same model.

Unfortunately, I don't believe this is a task to focus on at the moment. We have higher-priority items, like making modeling-app able to pump KCL to the backend even faster than we do now.

A short-term solution would be to use better computer vision techniques: rather than basing our comparisons on raw pixel colors, compare their "color distance" (https://en.wikipedia.org/wiki/Color_difference, preferably something from the CIE color space), and downscale the images to avoid comparing minute details (which is what neural networks do). For models with internal geometry, we can get "halfway there" by rendering with transparency.
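As an illustration of the color-distance idea, here's a sketch using the low-cost "redmean" approximation from the Wikipedia article linked above (a cheap stand-in for a proper CIE delta-E). Instead of requiring exact pixel equality, a comparison could count a pixel as matching when its color distance falls under a tolerance, which would absorb tiny anti-aliasing shifts. The function name and tolerance are illustrative assumptions, not an existing API.

```rust
/// "Redmean" color distance between two sRGB pixels, per the formula on
/// Wikipedia's "Color difference" article: a weighted Euclidean distance
/// whose red/blue weights depend on the mean red value of the two colors.
fn redmean_distance(a: [u8; 3], b: [u8; 3]) -> f64 {
    let rbar = (a[0] as f64 + b[0] as f64) / 2.0;
    let dr = a[0] as f64 - b[0] as f64;
    let dg = a[1] as f64 - b[1] as f64;
    let db = a[2] as f64 - b[2] as f64;
    ((2.0 + rbar / 256.0) * dr * dr
        + 4.0 * dg * dg
        + (2.0 + (255.0 - rbar) / 256.0) * db * db)
        .sqrt()
}

fn main() {
    // Two near-identical greys: a tiny distance, well under a tolerance
    // of, say, 10 — so an anti-aliasing nudge wouldn't count as a diff.
    assert!(redmean_distance([200, 200, 200], [201, 201, 201]) < 10.0);
    // Black vs. white: a huge distance, clearly a real change.
    assert!(redmean_distance([0, 0, 0], [255, 255, 255]) > 100.0);
}
```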


adamchalmers commented Aug 21, 2024

Yeah, I'm not saying we need to do it now; clearly 20/20 works well enough for most cases. But I wanted to write it down for the future. I really like the idea of making 20/20 smarter with CV! And having an option to set models to transparent, so you can see the internal edges, would be really helpful too.
