JFROG
Note that elevated permissions are required for these support tasks. This document is not a replacement for the JFrog documentation, where further and more detailed information can be found. It is just a list of issues and tasks we have encountered during our life with JFROG's Artifactory and XRAY, meant to hopefully speed you along without needing to go through more documentation than necessary.
Sometimes a user will want a repository added so that they can download packages not available in our current channels. A remote repository serves as a caching proxy for a repository managed at a remote URL.
To add a remote repository, navigate to the Administration tab, followed by Repositories, hit Add Repositories, and then finally click Remote Repository. From there, click the package type you want added and fill in the fields. In my experience most of them were unnecessary, but you can view the additional fields and their descriptions in the JFrog documentation.
You will need to navigate to Identity and Access, followed by Permissions. In there you can edit the Any Remote permission, which is for remote repositories that are generally available to the AAW and other projects that leverage our Artifactory instance. Just hit the "pen" icon to Edit Repositories and drag the repository from Available Repositories to Included Repositories.

To grant access to specific users instead, hit the New Permission option, name the permission, and add the repository you want the permission to apply to. Then hit the Users tab to add any specific users, as well as their specific permissions on the repository selected earlier.
The catch here is that since it is a Remote Repository, you will need to add the Deploy/Cache option in addition to the Read option. If you only select Read, the user will only be able to see and use what is already in the cache; without the Deploy/Cache option, the user cannot make a request to Artifactory to pull a new package from the remote repository.
Normally you can just use the Set Me Up feature and it will direct you on how to use it.

Note: For use inside the AAW production cluster, the URL will not be the same as what is shown in Set Me Up. You will need to refer to the internal service URL of http://jfrog-platform-artifactory-ha.jfrog-system:8081/
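For example, here is a rough sketch of installing a Python package from inside the cluster through the internal service URL. The repository key (pypi-remote) and the package name are placeholders for illustration, not the actual values; use whatever Set Me Up shows for the repository in question.

```bash
# Hypothetical sketch: install a package through the Artifactory pypi proxy from inside the cluster.
# "pypi-remote" is a placeholder repository key and "some-package" a placeholder package name.
pip install \
  --index-url http://jfrog-platform-artifactory-ha.jfrog-system:8081/artifactory/api/pypi/pypi-remote/simple \
  --trusted-host jfrog-platform-artifactory-ha.jfrog-system \
  some-package
```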
We already have conda-forge, pypi, and cran pre-configured to be used in our images as well.
Depending on the type of repository added, you may want to add some default configuration to our dockerfiles so users can use packages without needing to paste in the full service URL. Take adding a conda repository, for example: you would want something similar to the sketch below, which would allow users to install with conda without the long URL. As of right now this also has to be done for remote-desktop.
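A minimal sketch of that configuration, assuming the remote conda repository is exposed under the key conda-remote (a placeholder, not the real repository name); this is the command you would add as a RUN step in the dockerfile:

```bash
# Hypothetical sketch: register the Artifactory conda endpoint as a default channel.
# "conda-remote" is a placeholder repository key; substitute the actual repository name.
conda config --system --add channels \
  http://jfrog-platform-artifactory-ha.jfrog-system:8081/artifactory/api/conda/conda-remote
```

With that in place, a plain conda install of a package resolves against the proxied channel without the long URL.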
Oftentimes this means that XRAY is down and, due to the blocking policy on the pypi repository, unscanned artifacts get blocked. The other cause may be https://github.com/StatCan/daaas/issues/1520, where the database seemingly gets stuck in the "sync" state; as noted there, "Indexing may not be available during database sync", and indexing is important for XRAY to do its work.
This could be for a multitude of reasons, and some of them can be solved with a simple restart of the statefulset (or by forcibly killing the pod and letting it come back online), as sketched below.
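A minimal sketch of that restart, assuming XRAY runs as a statefulset in the jfrog-system namespace; the statefulset and pod names below are placeholders, so confirm the real ones first.

```bash
# Placeholder names: confirm the real ones with "kubectl get statefulsets -n jfrog-system".
kubectl -n jfrog-system rollout restart statefulset jfrog-platform-xray

# Or forcibly kill the pod and let the statefulset bring it back online.
kubectl -n jfrog-system delete pod jfrog-platform-xray-0
```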
Another reason for this is the XRAY data folder getting too full, causing the Indexer to stop indexing. In the past we have increased the size of the disk itself (like in the issue above), but we have also just gone and deleted some of the files; in one such issue we went through some of the older big files and deleted them. After doing this manual cleanup, everything (including the database sync) seemed to be working fine. As we do not want to have to do that every time, we have an issue open to upgrade JFrog, which would allow us to set a retention period for this data.
Like with XRAY, a quick possible solution is to restart the statefulset (or to forcibly kill the pod and let it come back online). Having said that, we have had an actual issue with the underlying database before; the solution there was to give the database a gentle push, and everything eventually came back online.