
Refactor socket and stream tests #6516

Open

surovic opened this issue Nov 22, 2022 · 2 comments

@surovic
Contributor

surovic commented Nov 22, 2022

This seems like it should be two tests (server, client) with a local fixture to set up the common code. I would use parametrize for tests that share ~90% of the code.

Originally posted by @surovic in #6511 (comment)
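A minimal sketch of that shape, assuming hypothetical helpers `run_instrumented` and `check_taints` as stand-ins for whatever the real tests call:

```python
import pytest


# Placeholder helpers -- stand-ins for the project's real instrumentation
# and evaluation code, which this sketch does not know.
def run_instrumented(binary: str, mode: str) -> dict:
    return {"binary": binary, "mode": mode, "taints": ["input"]}


def check_taints(trace: dict) -> None:
    assert trace["taints"], f"no taints recorded for {trace['binary']}"


@pytest.mark.parametrize(
    "mode, binary",
    [
        ("server", "socket_server_bin"),
        ("client", "socket_client_bin"),
    ],
)
def test_socket_io(mode, binary):
    # One shared test body; parametrize supplies the ~10% that differs.
    trace = run_instrumented(binary, mode=mode)
    check_taints(trace)
```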

@surovic
Contributor Author

surovic commented Nov 22, 2022

Maybe this would be a good candidate for a parametrized fixture? Something like program_trace, but local to this test file.

Originally posted by @surovic in #6511 (comment)
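For reference, a file-local parametrized fixture could look roughly like this; the fixture name, binary names, and the returned dict are made up for illustration and are not the project's real API:

```python
import pytest


# Hypothetical file-local fixture, parametrized over two test programs.
# The dict is only a stand-in; a real fixture would build a trace the same
# way the existing program_trace fixture does.
@pytest.fixture(params=["stdin_read_bin", "stdin_write_bin"])
def local_program_trace(request):
    binary = request.param
    return {"binary": binary, "events": [f"stdin access in {binary}"]}


def test_stdin_trace(local_program_trace):
    # The fixture's params make this test run once per binary.
    assert local_program_trace["events"]
```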

@hbrodin
Collaborator

hbrodin commented Nov 28, 2022

@surovic I've been thinking about how to refactor this for a while now, but I'm unable to come up with something I really like.

For the stdin program trace you need the program_trace, stdin_data, and method to be able to do the evaluation. When I attempted to solve this with fixtures, I only managed to grow the program without any improvement in readability, IMO.

Regarding the socket read/write tests, my understanding is that we should ideally end up with two separate test cases. Looking at it, I can see us creating two separate fixtures (instrumented_server_run/instrumented_client_run) that provide the program trace and output. But since we need the send_data for evaluation, we would also need a marker/fixture to access it; that would allow send_data to be a marker on each of the test cases.

Once the fixture functions have executed and we have the respective program traces, we would do exactly the same evaluation for both test cases. That evaluation should probably go into a separate function, reducing the test functions to just invoking it with different fixtures as arguments.
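A rough sketch of that arrangement, with the fixture bodies reduced to placeholder dicts since the real instrumentation isn't shown here (only the marker plumbing and the shared evaluation are the point):

```python
import pytest


def _send_data_for(request) -> bytes:
    # Read the payload off the test's send_data marker (in a real setup the
    # marker would be registered in pytest.ini or conftest.py).
    marker = request.node.get_closest_marker("send_data")
    return marker.args[0] if marker is not None else b"default payload"


@pytest.fixture
def instrumented_server_run(request):
    data = _send_data_for(request)
    # Placeholder: a real fixture would run the instrumented server and
    # return its program trace and output.
    return {"trace": ["socket_read"], "output": data, "sent": data}


@pytest.fixture
def instrumented_client_run(request):
    data = _send_data_for(request)
    # Placeholder: likewise for the instrumented client.
    return {"trace": ["socket_write"], "output": data, "sent": data}


def check_socket_run(run: dict) -> None:
    # The shared evaluation both test cases would invoke.
    assert run["output"] == run["sent"]
    assert run["trace"]


@pytest.mark.send_data(b"hello over tcp")
def test_socket_server(instrumented_server_run):
    check_socket_run(instrumented_server_run)


@pytest.mark.send_data(b"hello over tcp")
def test_socket_client(instrumented_client_run):
    check_socket_run(instrumented_client_run)
```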

From my point of view, all of the solutions I've been able to come up with add unnecessary complexity. They might be more 'pure' from a pytest perspective, but I don't think they would make the test cases more comprehensible.

Please, if you see any good solutions, do feel free to suggest them; I might be approaching the problem from a bad angle. I'm all for improving the readability and comprehensibility of the code we produce, but for this particular case I haven't been able to come up with anything good.

If you (or anyone else) don't have any concrete ideas, I suggest we close this issue.
