Init tests #369
Conversation
Force-pushed from 1f07bb9 to a76a670
We get a warning during the tests. I'm not yet sure of the best way to fix it. It should be OK for now.
I think we are going in a direction that may be more brittle and targeted than we need for a small site like this.
- We should mostly focus on the integration tests and allow the app to use both Postgres and Redis (for now). The `FakeRedis` implementation scares me and adds maintenance if/when we expand Redis usage, which doesn't seem too valuable to me.
- Let's pull the fetch logic in `.web.tasks` out from the processing logic; that allows easy mocking to avoid needing real data, while also avoiding changing function signatures strictly for tests.
- We can document how to run tests using the provided `docker-compose.yml` instead of adding a mostly duplicate one.

A follow-up for me (or whoever wants to do it): let's model the remote objects instead of passing dicts around.
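The fetch/process split suggested above can be sketched like this; the function names and the shape of the Slack payload are assumptions for illustration, not the project's real `.web.tasks` API:

```python
# Hypothetical sketch: separate the I/O from the processing so tests can
# exercise the logic with plain dicts instead of real data or FakeRedis.

def fetch_members(client):
    # I/O boundary: the only function that talks to the Slack API.
    # In tests, pass a stub client instead of hitting the network.
    return client.get("users.list")["members"]

def process_members(members):
    # Pure processing: count members that are neither bots nor deleted.
    # Testable with literal dicts, no mocking framework needed.
    return sum(1 for m in members if not m.get("is_bot") and not m.get("deleted"))

def update_member_count(client, cache):
    # The scheduled task only wires fetch and process together,
    # so its signature never changes just to accommodate tests.
    cache["member_count"] = process_members(fetch_members(client))
    return cache["member_count"]
```

With this shape, a test only needs a stub object with a `get` method; no function signature changes strictly for testing.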
docker-compose.test.yml (Outdated)

    @@ -0,0 +1,15 @@
    version: "3.4"
These dependencies are already met by `docker-compose.yml`. Can you expand on why we need a dedicated test one?
`docker-compose.yml` brings the dependencies and the website up for testing, and only exposes port 8000 to serve the website. The goal of `docker-compose.test.yml` is to spin up only Postgres and Redis, so you have local instances the tests can connect to.
There may be a better way to do that.
The way I would probably run it is:

    docker-compose run web python -m pytest tests/

It may be worth throwing in a makefile to simplify commands (see #374).
pyslackersweb/website/models.py (Outdated)

    class InviteSchema(Schema):
        email = fields.Email(required=True)
    -   agree_tos = fields.Boolean(required=True)
    +   agree_tos = fields.Boolean(required=True, validate=validate_true)
This could be simplified to `validate=bool`, since it's a callable.
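The suggestion relies on the validator contract (assumption: a marshmallow validator that returns a falsy value fails validation). A minimal stdlib sketch, where `validate_true` stands in for the helper referenced in the diff above:

```python
# Sketch of the validator contract behind `validate=bool`: any callable
# works as a validator, and a falsy return marks the value invalid.

def validate_true(value):
    # Hand-rolled helper: only literal True passes.
    return value is True

def run_validator(validator, value):
    # Mimics how a schema applies `validate=`: falsy return -> invalid.
    return "valid" if validator(value) else "invalid"

# For the checkbox's True/False inputs, the built-in `bool` gives the same
# verdict as the helper, so `validate=bool` is enough here.
```

Since `agree_tos` is already a `Boolean` field, the deserialized value can only be `True` or `False`, where `bool` and `validate_true` agree.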
tests/test_website.py (Outdated)

    @pytest.fixture
    async def website_client(app: aiohttp.web.Application, aiohttp_client, slack_client, redis):
To avoid confusion with the routing (as in L22), why not make a base `test_client` for the top-level app, so all the HTTP calls look as they would in the browser? We can still have the `website_client` customize the `slack_client` it uses for testing.
I don't understand what you mean.
L22 checks that `/` is correctly redirected to `/web`, as it is in production.
We only have one subapp for the moment, so we could have one `app` fixture doing everything, but once we have multiple subapps we may need to split the fixture anyway.
The confusion I mean is that it's odd that the "website app client" can GET `/`, since that is a route on the main "app" and, from my view, it shouldn't be reachable.
I agree with breaking them up; I would expect us to have an `app_client`, `web_client`, and `*_client` (sirbot or whatever else we add).
But I can refactor some ideas after we merge it in. I typically namespace the tests to match the packages under test.
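The routing being debated can be sketched with a stdlib-only toy (not the real aiohttp app; handlers and the tiny client are made up for illustration): `/` belongs to the top-level app and redirects into the website subapp, which is why only a client bound to the top-level app can exercise that redirect.

```python
# Toy model of the top-level app + subapp routing under discussion.

def make_app():
    # Route table: path -> handler returning (status, body_or_location).
    def index(request):
        return (302, "/web")          # top-level route, redirects to the subapp
    def web_index(request):
        return (200, "website home")  # served by the mounted website subapp
    return {"/": index, "/web": web_index}

def get(app, path):
    # Tiny test client: a single top-level `app_client` fixture like this
    # can reach every route the way a browser would, while a client bound
    # only to the subapp would never see "/" at all.
    return app[path](None)
```

This is the distinction behind having both an `app_client` (sees `/` and the redirect) and a `web_client` (sees only the subapp's routes).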
tests/test_website.py (Outdated)

    assert result

    for record in caplog.records:
I really do not want to test the logging implementation; that seems brittle and not something we need to ensure doesn't change.
The goal is not to check the logging implementation but to ensure no exceptions were raised. It kind of loops back to the `return` / `raise` issue of the tasks.
IMO it's fine for now.
Later on, we should split those tasks into two functions: one handling the logging / exception catching for apscheduler, and a second doing the actual work that we can test.
Force-pushed from 440420b to ea0af01
@mrasband updated, please take another look.
No description provided.