Implement framework to invoke/exercise UI modules in testing environment #232

Open
brickbots opened this issue Sep 7, 2024 · 3 comments
Labels: enhancement, help wanted

Comments

@brickbots (Owner)

  • Set up a display type that works without hardware or pygame. I believe the Luma module already has something like this
  • Develop a fixture that spins up or mocks all of the items required to invoke a UIModule (a rough sketch follows this list)
    • display_class - the mock/stub display class
    • camera_image - A PIL image which can be set to solvable/unsolvable during testing
    • shared_state - A mock (saved?) shared state where we can set solution/gps info for testing
    • command_queues - This is a dictionary of queue objects, should be easy to mock
    • config_object - Probably just use the actual config object, but could be easily mocked
    • catalogs - Probably best to use the actual catalogs object
    • add_to_stack - Just a function that returns
    • remove_from_stack - Just a function that returns
  • Add a flatten_menu_items function to menu_manager to traverse the whole tree and return a flat list of menu_item definitions
  • Develop a UIModule exerciser...
    • Adjust state in various ways (non-solvable image, gps/no gps, ???)
    • Call .activate of module
    • Call all key related methods, with update after each
    • Adjust state and repeat
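
Here is a rough sketch of what the fixture and exerciser could look like, assuming pytest and unittest.mock. The argument names follow the list above, but the real UIModule constructor signature, the command queue names, and the key-handler method names are assumptions and may differ in PiFinder.

```python
import queue
from unittest.mock import MagicMock

import pytest
from PIL import Image


@pytest.fixture
def ui_module_kwargs():
    """Mocked/stubbed versions of everything needed to construct a UIModule."""
    return {
        "display_class": MagicMock(),                  # dummy display, no hardware or pygame
        "camera_image": Image.new("RGB", (512, 512)),  # swap in a solvable image per test
        "shared_state": MagicMock(),                   # set solution/gps info on this per test
        "command_queues": {name: queue.Queue() for name in ("camera", "console", "ui")},
        "config_object": MagicMock(),                  # or the actual config object
        "catalogs": MagicMock(),                       # probably the actual catalogs object
        "add_to_stack": lambda *a, **kw: None,         # just a function that returns
        "remove_from_stack": lambda *a, **kw: None,    # just a function that returns
    }


def exercise_ui_module(module):
    """Activate a module, then call each key handler with an update after each."""
    module.activate()
    for name in ("key_up", "key_down", "key_enter", "key_plus", "key_minus"):
        handler = getattr(module, name, None)  # key method names are assumptions
        if handler is not None:
            handler()
            module.update()
```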

Once we have all of these pieces, the test becomes (sketched after this list):

  • Set up the fixture
  • Get a list of menu_items to test
  • Create a class instance for each menu_item
  • Pass the class instance to the exerciser
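
Put together, the resulting test could look roughly like this, using the fixture and exerciser sketched above. The import path for menu_manager and the assumption that each menu_item definition exposes its UIModule class under a "class" key are illustrative, not confirmed PiFinder APIs.

```python
from PiFinder import menu_manager  # assumed home of the proposed flatten_menu_items helper


def test_exercise_all_ui_modules(ui_module_kwargs):
    menu_items = menu_manager.flatten_menu_items()  # flat list of menu_item definitions

    for item in menu_items:
        module_class = item["class"]               # assumed key holding the UIModule class
        module = module_class(**ui_module_kwargs)  # one instance per menu_item
        exercise_ui_module(module)                 # activate and press every key
```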

We can add this as the first integration test, but if it takes too long we can make a new 'slow' category of tests. In any case, we can then run this either on all merges to main, or perhaps on PRs, or only on PCs.

@mrosseel (Collaborator) commented Sep 9, 2024

In general I like this plan; however, we should refrain from doing anything special just for testing, and instead make the app more flexible about where the config comes from, like the queues.
I guess my main point is: let's use this exercise not only to enable testing, but to improve the code while we're at it.
One way to check this is whether the changes bring us closer to full replay capability. Not something we need right now, but I feel that would be the correct general direction.

Also not sure why the stack methods shouldn't do anything?

@brickbots (Owner, Author)

I should have been a bit more clear about the goal/type of testing I'm thinking of here... especially as I dropped this in a PR dealing with a different type of testing 😅 The key part of this for PR #231 is the dummy display, which would allow us to do automated testing based on keyboard scripts, which can't be done right now.

Monkey Scripts
Sending keys via the keyboard buffer (monkey scripts, as @mrosseel likes to accurately call them) should continue to be a key part of our overall testing. This is a sort of last-line, fully integrated system test... just trying to find things that will crash the system. It doesn't test for expected behavior (but maybe it could! See below).

Ideally this is run on actual hardware... but making the changes required to run these off the actual hardware is a good goal here, and I think the main thing is the display driver. If we had that, we could do some amount of CI testing using this method.

This is also where I think generating a script that understands the menu system would be a good thing. We can generate the right keypresses into the script to visit each and every menu and send a selection of keys to each in turn, avoiding certain keys that we know will cancel or invalidate the testing.

This sort of testing will be somewhat slow, as we do need to pause between each keystroke; this limits the keystrokes per minute and could result in long test times.
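
As a sketch of that idea: a generator that walks a flat list of menu paths (for example from the proposed flatten_menu_items) and emits a keypress script, skipping keys we know will cancel the run. The key names, the navigation placeholders, and the queue standing in for the keyboard buffer are all assumptions, not the real PiFinder keyboard-buffer format.

```python
import queue
import time

SAFE_KEYS = ["up", "down", "plus", "minus", "square"]  # keys worth poking in each menu
AVOID_KEYS = {"long_square"}                           # keys known to cancel/exit testing


def generate_monkey_script(menu_paths, pause=0.2):
    """Yield (key, pause) tuples that visit each menu path and poke it with safe keys."""
    for path in menu_paths:           # e.g. ["Objects", "By Catalog", "Messier"]
        for _step in path:
            yield ("down", pause)     # placeholder navigation; the real script would pick
            yield ("enter", pause)    # keypresses based on each item's position in the menu
        for key in SAFE_KEYS:
            if key not in AVOID_KEYS:
                yield (key, pause)
        yield ("back", pause)         # head back toward the root before the next path


# Example: feed the generated keys into a queue standing in for the keyboard buffer.
keyboard_queue = queue.Queue()
for key, delay in generate_monkey_script([["Objects", "Messier"]]):
    keyboard_queue.put(key)
    time.sleep(delay)  # pausing between keystrokes is what limits keystrokes per minute
```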

Replay / Expected Behavior Testing
This one is interesting and I've not thought it totally through, but it should be possible to record an observing session and play it back for testing. We'd have to record the initial state, specific timestamps of keys pressed, the IMU data stream, GPS messages, and every image grabbed from the camera, along with the shared state at some interval.

We could then play back the inputs (IMU/keypresses/camera captures/GPS) and compare the resulting state against the recorded state.

This would be SLOW as it would have to run in real time... but it's an interesting idea.
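
A minimal sketch of what a recorded-session format and replay check could look like; the field names, event sources, and comparison are illustrative, and the real shared_state contents would determine what actually gets compared.

```python
import time
from dataclasses import dataclass, field
from typing import Any


@dataclass
class RecordedEvent:
    timestamp: float   # seconds from session start
    source: str        # "key", "imu", "gps", or "camera"
    payload: Any       # key name, IMU reading, GPS message, or image path
    expected_state: dict = field(default_factory=dict)  # state snapshot taken at some interval


def replay(events, apply_event, read_state, speed=1.0):
    """Feed recorded inputs back in (roughly) real time and compare the resulting state."""
    start = time.monotonic()
    mismatches = []
    for event in events:
        # Wait until the event's recorded offset has elapsed before applying it.
        delay = event.timestamp / speed - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        apply_event(event)
        if event.expected_state:
            actual = read_state()
            if actual != event.expected_state:
                mismatches.append((event.timestamp, event.expected_state, actual))
    return mismatches
```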

UI Module Unit Testing
This is what this issue is concerned with, and a testing fixture is absolutely required for it. As a first step, just exercising all the keys is helpful, but this can be expanded to include expected-behavior testing for each module once the basics of running a UIModule in the test fixture are worked out.
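
For example, an expected-behavior test built on the fixture might look roughly like this. UIStatus, its module path, the screen attribute, and the location() call are assumptions about the PiFinder UI code, not confirmed APIs.

```python
def test_status_module_shows_gps_fix(ui_module_kwargs):
    from PiFinder.ui.status import UIStatus  # assumed module path

    # shared_state is a MagicMock from the fixture, so we can pin a fake GPS fix on it.
    ui_module_kwargs["shared_state"].location.return_value = {"lat": 34.0, "lon": -118.0}

    module = UIStatus(**ui_module_kwargs)
    module.activate()
    module.update()

    # With a GPS fix available, the rendered screen should not be blank.
    assert module.screen is not None
```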

Since this is just testing a single UIModule, simply returning from the add/remove-from-stack functions keeps the UIModule test going. A different test fixture would be required for testing the menu_manager, and we should get one of those going as well!

This sort of unit test can run very fast, so it's possible to do this much more frequently.
