Run integration tests against Artifactory #6

Open

briantist opened this issue Jun 30, 2022 · 2 comments

@briantist (Owner)

I started implementing the necessary pieces in pytest to launch an artifactory-oss container and get it set up so we can run integration tests against it.
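For reference, a rough sketch of the kind of session-scoped fixture I mean (the container name, port, and image tag here are illustrative, not the actual conftest):

```python
import subprocess

import pytest

@pytest.fixture(scope="session")
def artifactory_url():
    # Launch a throwaway artifactory-oss container for the whole test session.
    subprocess.run(
        ["docker", "run", "-d", "--rm", "--name", "arti-test",
         "-p", "8082:8082",
         "releases-docker.jfrog.io/jfrog/artifactory-oss:latest"],
        check=True,
    )
    try:
        yield "http://localhost:8082/artifactory"
    finally:
        # --rm removes the container once it stops.
        subprocess.run(["docker", "stop", "arti-test"], check=True)
```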

I was hamstrung in a number of ways, since JFrog makes a point of putting basic features behind a paywall, requiring Pro or Enterprise licensing.

For one thing, we need to first address #5 since that is used in basically every API call.

But we have other issues too: it turns out the APIs to create a repository are also behind paid licenses... so we actually cannot programmatically set up our test environment with a free license. They expect you to manually connect to the web UI and click through to create it.

This is really demoralizing. I am trying to think through ways of creating a test image container or something, with the repository already set up. It's going to suck, because every time we want to change something, at least part of the process will have to be done manually.

I'm also worried that I'll keep uncovering more and more things as we go along that make it difficult or impossible to get a good test environment.

But I feel like I have to try because I don't like the current situation and the lack of tests.

briantist added a commit that referenced this issue Jul 3, 2022
@briantist (Owner, Author)

I did some research on this today. The OSS container comes with a repository already created, called example-repo-local, and it is the correct type (a "generic" repository), but in my past attempts at using it, the result when trying to "find" it was always None, so the code would try to create it and run into the license issue.

So it turns out the find_repository/find_repository_local code in dohq-artifactory also uses APIs that require a Pro license. Those methods expect 400 or 404 HTTP errors when the repository you're looking for doesn't exist, so they swallow those; but the status code for an API that requires a license is also a 400, so these methods just act as if the repository doesn't exist.

Reported here:

So I've been able to work around that case by using get_repositories() in ce0f93c. With that, if I resign myself to using only the existing example repository for all tests, I might be able to avoid the annoying situation of needing manual steps to configure a repository for a test environment, and can just use the container as-is 🤞. It remains to be seen how much other difficulty I'll run into.
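The workaround looks roughly like this; a sketch only, since it assumes get_repositories() yields objects with a name attribute, and the exact shape in ce0f93c may differ:

```python
from artifactory import ArtifactoryPath

arti = ArtifactoryPath(
    "http://localhost:8082/artifactory", auth=("admin", "password")
)

def find_repo(name):
    # find_repository_local() swallows the license-error 400 and reports
    # the repo as missing, so list all repositories (which works on OSS)
    # and match by name instead.
    for repo in arti.get_repositories():
        if repo.name == name:
            return repo
    return None

repo = find_repo("example-repo-local")
assert repo is not None, "the OSS container should ship with this repo"
```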

Also, for the purposes of having test collections in the repository already, "populated data" would be ideal for most of it; otherwise I have to populate it at runtime, and since I don't want to commit tarballs, that probably means committing actual collections and building them at runtime to upload. I could see cases where that's desirable and others where it isn't.

So we'll see, but I suspect at some point it's going to become evident we need some persistent data that isn't going to be so easy to put in place at runtime.


As for the plan of how to build such an environment, I have an idea that I experimented with a little, and I think it could work.

The Artifactory container defines a VOLUME in its build for the data stored in the instance. This is really annoying, because it means I can't just make a new Dockerfile that uses theirs in a FROM, write data, and commit it: anything written under a declared VOLUME path lives in a volume rather than an image layer, so it never ends up in the committed image.

So I came up with something that seems viable.

First, I created a volume, and mounted it into the Artifactory container. Started it up, made the manual changes I needed, shut down the container.

Then I mounted the same volume in a plain old alpine container, tar'd the contents of the volume to a "local" (in-container) file, and committed the container as a new image.

This gives me a runnable image that contains a tar of all the data I need. alpine is only ~5 MB and already comes with tar, so the bulk of the image really is just the data, which stays quite small.
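For the record, the experiment was roughly this (the volume name, tags, and data path are illustrative):

```sh
# Create a volume and run Artifactory with it mounted; make the manual
# changes through the web UI, then stop the container.
docker volume create arti-data
docker run -d --name arti -p 8082:8082 \
    -v arti-data:/var/opt/jfrog/artifactory \
    releases-docker.jfrog.io/jfrog/artifactory-oss:latest
docker stop arti

# Mount the same volume in alpine, tar it to a path that's part of the
# container filesystem (not a volume), and commit the stopped container.
docker run --name archiver -v arti-data:/data alpine \
    tar -czf /artifactory-data.tar.gz -C /data .
docker commit archiver arti-seed-data:latest
```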

The idea is that I could push this image to this repository's registry as a data image.

When wanting to run Artifactory for testing, the steps would roughly be in reverse:

  1. Create (new, empty) volume
  2. Pull alpine "data" image
  3. Run with the volume mounted, untar the local file into the volume
  4. Start the Artifactory container with the (now populated) volume mounted
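Concretely, the manual version of those steps (same illustrative names as above):

```sh
docker volume create arti-data
docker run --rm -v arti-data:/data arti-seed-data:latest \
    tar -xzf /artifactory-data.tar.gz -C /data
docker run -d -p 8082:8082 \
    -v arti-data:/var/opt/jfrog/artifactory \
    releases-docker.jfrog.io/jfrog/artifactory-oss:latest
```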

This was tested and works, and I think all the steps can be automated nicely with docker compose, but I didn't try that part out yet because based on the above, I might not need this at all, we'll see.

The way I envision the compose file working out is to use depends_on with condition: service_completed_successfully.


So there'd be two compose configurations: one for running the artifactory test environment and one for preparing the data image.

The data prep config should do things in this order:

  1. Create the volume (maybe an optional separate way of doing this starts with the current state of the data volume?)
  2. Start Artifactory with the volume mounted
  3. The alpine image is also specified as a service, set to run the tar command that copies the data into a local tar file, but it depends on the artifactory service finishing, so it shouldn't start until Artifactory is shut down

So we'd start this config (this has to be done interactively!), configure whatever we need in Artifactory through the web UI as an admin, and shut it down when we're done. When the artifactory container exits, the archiver service starts and does the tar step.
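Something like this hypothetical docker-compose.prep.yml (service names, tags, and paths are assumptions, not a file that exists yet):

```yaml
services:
  artifactory:
    image: releases-docker.jfrog.io/jfrog/artifactory-oss:latest
    ports:
      - "8082:8082"
    volumes:
      - arti-data:/var/opt/jfrog/artifactory

  # Waits for the artifactory service to exit, then archives the volume
  # into its own container filesystem so the container can be committed.
  archiver:
    image: alpine
    volumes:
      - arti-data:/data
    command: tar -czf /artifactory-data.tar.gz -C /data .
    depends_on:
      artifactory:
        condition: service_completed_successfully

volumes:
  arti-data:
```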

Next step (could be another container with dependency, or a wrapping shell script, or manual) is to docker commit the data image and push it up.


Now for running Artifactory as a test target, the steps are similar:

  1. Create volume
  2. Run data image with volume mounted, command is set to untar into the volume and exit
  3. Artifactory service is set to depend on the data image (service_completed_successfully) so it doesn't start until that untar is done
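A hypothetical docker-compose.test.yml for that, with the same caveats about names:

```yaml
services:
  # Unpacks the data image's archive into the fresh volume, then exits.
  seed:
    image: arti-seed-data:latest
    volumes:
      - arti-data:/data
    command: tar -xzf /artifactory-data.tar.gz -C /data

  artifactory:
    image: releases-docker.jfrog.io/jfrog/artifactory-oss:latest
    ports:
      - "8082:8082"
    volumes:
      - arti-data:/var/opt/jfrog/artifactory
    depends_on:
      seed:
        condition: service_completed_successfully

volumes:
  arti-data:
```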

@briantist (Owner, Author)

Test-only APIs that need a Pro license

Production-used APIs that need a Pro license
