refactor: java cache and topics integration tests should use random cache names #379
Conversation
Refactors the cache and topics integration tests to use a random cache name instead of reading it from the environment. This follows the JavaScript SDK pattern. The tests have been rewritten around a base test class that sets up the client, assigns a random cache name, creates the cache, and tears it down at the end of the tests. Because each test class now reuses the same cache, the tests also use random keys in each test case. Together, these two changes fix race conditions that had caused consistent test failures when running locally. Lastly, the auth tests were not closing their clients, which consistently produced warning messages in the logs; they now close clients between uses.
Previously we created the test cache in each of the test classes. This was unnecessary and created more opportunity for developer error when writing new test classes, so we lift that setup and teardown into the base class.
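For anyone reading along, here is a minimal sketch of what such a lifted base class can look like, assuming JUnit 5 and the builder/configuration calls from the public Momento Java SDK docs; the class name, TTL, and cache-name prefix are illustrative assumptions, not the PR's exact code:

```java
import java.time.Duration;
import java.util.UUID;
import momento.sdk.CacheClient;
import momento.sdk.auth.CredentialProvider;
import momento.sdk.config.Configurations;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;

public abstract class BaseCacheTestClass {
  protected static CacheClient cacheClient;
  protected static String cacheName;

  @BeforeAll
  static void setUpClientAndCache() {
    // Build one client per test class; the builder arguments here follow the
    // public SDK docs and should be treated as assumptions.
    final CredentialProvider credentialProvider = CredentialProvider.fromEnvVar("TEST_AUTH_TOKEN");
    cacheClient =
        CacheClient.builder(credentialProvider, Configurations.Laptop.latest(), Duration.ofSeconds(60))
            .build();

    // A random cache name per test class replaces the shared TEST_CACHE_NAME,
    // so concurrent or repeated runs cannot interfere with each other.
    cacheName = "java-integration-test-" + UUID.randomUUID();
    cacheClient.createCache(cacheName).join(); // wait for the cache to exist before tests run
  }

  @AfterAll
  static void tearDownClientAndCache() {
    cacheClient.deleteCache(cacheName).join();
    // Closing the client is also what silences the warnings the auth tests were logging.
    cacheClient.close();
  }
}
```

In this sketch, concrete test classes simply extend the base class and remain responsible only for any additional caches they create themselves.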
@@ -160,14 +147,17 @@ void testBatchGetWithStringKeys_RequestsMoreThanMaxConn() {
            .build();

    // test data with more keys than max concurrent requests
    Vector<String> keys = new Vector<>();
Suggested change:
-    Vector<String> keys = new Vector<>();
+    final List<String> keys = new ArrayList<>();
Is there a specific reason to use a Vector? I'm not sure I've ever seen one of those in use.
That's what I used in my intro to Java class 22 years ago. I can use ArrayList, lol.
Updated in 4eb3b71
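Put together with the random-key change from the description, the rewritten block might look roughly like this; the `buildKeys` helper and the `+ 10` padding are hypothetical, shown only to illustrate the pattern:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

final class BatchGetKeyFixture {
  // Builds more keys than the max concurrent request limit, using ArrayList
  // per the review suggestion, with a random suffix on every key so test
  // cases sharing one cache never collide with each other's data.
  static List<String> buildKeys(final int maxConcurrentRequests) {
    final List<String> keys = new ArrayList<>();
    for (int i = 0; i < maxConcurrentRequests + 10; i++) {
      keys.add("key-" + i + "-" + UUID.randomUUID());
    }
    return keys;
  }
}
```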
-    if (System.getenv("TEST_CACHE_NAME") == null) {
-      throw new IllegalArgumentException(
-          "Integration tests require TEST_CACHE_NAME env var; see README for more details.");
    credentialProvider = CredentialProvider.fromEnvVar("TEST_AUTH_TOKEN");
Is it worth changing TEST_AUTH_TOKEN to MOMENTO_API_KEY in this PR? I thought we were slowly moving to that.
I'm going to change the CI / env vars in a separate PR. This one is pretty busy already.
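If that follow-up PR wants to migrate gradually, one hypothetical approach is to prefer the new variable name and fall back to the old one; none of this is part of the current change:

```java
import momento.sdk.auth.CredentialProvider;

final class TestCredentials {
  // Hypothetical transition helper: prefer MOMENTO_API_KEY once CI exports it,
  // but keep honoring the legacy TEST_AUTH_TOKEN until then.
  static CredentialProvider fromEnvironment() {
    final String envVar =
        System.getenv("MOMENTO_API_KEY") != null ? "MOMENTO_API_KEY" : "TEST_AUTH_TOKEN";
    return CredentialProvider.fromEnvVar(envVar);
  }
}
```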
Looks good! Just a couple nits and I'll +1. Thanks for going through all those test classes.