♻️ 💅 Cleanup/polish the code to automatically cascade images from the server -> server dependencies #1082
Comments
I have segregated the tasks into two individual groups (CI/CD, Core code) plus one combo group (CI/CD + Core):
A. CI/CD
B. Core code
C. Core Code + CI/CD
Task A-7: Certificates location - External or Internal?
Fixed as part of the redesign changes themselves. Added certificates.
Since the same base server image is used by the containers that depend on the server image (webapp, analysis, admin-dash, public-dash-notebook), we added the certificates right at the source image, which is the external server image. This ensures that the certificates are present in all the cascading images. Based on the comment below, they were added to the internal images and removed from the external one.
@MukuFlash03 right, we have implemented this. But I am suggesting that we revisit that decision.
Task A-6: Switching from $GITHUB_ENV to step outputs using GITHUB_OUTPUT
Shankari's comments:
I couldn’t find any official statement on which is the recommended approach. Note that this doesn’t say that … One argument in favor of …
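For concreteness, here is a minimal sketch of the difference between the two approaches (the job, step, and output names and the tag format are illustrative, not taken from the actual workflows):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      # expose the step output at the job level so other jobs can read it
      docker_tag: ${{ steps.meta.outputs.docker_tag }}
    steps:
      - id: meta
        name: Compute the image tag
        run: |
          # Old style: an env var visible only to later steps in the *same* job
          # echo "DOCKER_TAG=${GITHUB_REF_NAME}_$(date +%Y-%m-%d--%M-%S)" >> "$GITHUB_ENV"
          # Step-output style: addressable as steps.meta.outputs.docker_tag
          echo "docker_tag=${GITHUB_REF_NAME}_$(date +%Y-%m-%d--%M-%S)" >> "$GITHUB_OUTPUT"
  push:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "Would push image with tag ${{ needs.build.outputs.docker_tag }}"
```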
Task A-2: Avoid uploading date as Artifacts -> Use .env file instead?
Shankari's comments:
REST API endpoints: I looked at some REST API endpoints to see if we can access the outputs of jobs/steps in a run from outside the workflow run. Not suitable.
Optimal approach:
Decoding this base64 data (for decoding, the newline characters '\n' need to be removed and the entire encoded string combined).
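For reference, a sketch of reading and decoding a file through the GitHub contents API (the repo and file path are assumptions, reused from the later comments):

```yaml
steps:
  - name: Read a file via the GitHub contents API
    run: |
      # The contents endpoint returns the file as base64 with embedded newlines,
      # so strip the newlines before decoding.
      curl -s https://api.github.com/repos/e-mission/e-mission-server/contents/.env.tags \
        | jq -r '.content' | tr -d '\n' | base64 -d > .env.tags
      cat .env.tags
```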
Task A-2: Avoid uploading date as Artifacts -> Use .env file instead? [Contd.]
With the API endpoint mentioned above, we can directly read the contents of files.
Proposed approach: have a separate .env file in the server repo.
Pros of this approach:
Why use something complicated, which retrieves the file as a base64 encoded string, instead of just reading the files directly using the raw file URL?
I just thought of sticking to the GitHub REST API and was looking for options within that. But reading the raw file contents directly is indeed simpler.
Refer to details in cleanup issue: Task A-6: e-mission/e-mission-docs#1082 (comment)
Refer to details in cleanup issue: Task A-2: e-mission/e-mission-docs#1082 (comment)
Added .env file initialized with the current latest tag of the admin-dash and server images. The internal script can read the docker image tags directly from .env.tags, using curl to fetch the raw file contents. Storing the server tag as well since the admin-dash Dockerfile uses it.
Removed workflow dispatch inputs: inputs are no longer needed since we read from .env.tags in the server repo directly.
Read raw file contents directly instead of using the REST API: the REST API endpoint returns base64-encoded data which then needs to be decoded; we can simply read the raw contents of the publicly available file.
Also changed the tag name to match the tag name in the internal repository, so that the internal script can use the tag name directly without extra fields like envVar.
For now, not removing artifacts until the internal script is updated to handle this change.
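A sketch of the simpler raw-file read described above (the branch, file path, and the variable name inside .env.tags are assumptions):

```yaml
steps:
  - name: Read image tags from the server repo's .env.tags
    run: |
      # Fetch the raw file from the public server repo; no base64 decoding needed.
      curl -sL https://raw.githubusercontent.com/e-mission/e-mission-server/master/.env.tags -o .env.tags
      # Variable names inside the file are illustrative
      source .env.tags
      echo "Server image tag: ${SERVER_IMAGE_TAG}"
```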
…ternal
Task A-8: Prefixing the branch name to the docker tag along with the date. The internal script will no longer need to maintain the different branch lists, since the images will be completely tagged in the external workflows themselves; we can then use the tags without modifications. For now, not prefixing the branch to the artifact, since we will be removing the artifact anyway and the current internal script works with artifacts. Once I update the internal script, I will come back and remove the artifacts. Also removing the prefixed branch name from the frontend image artifact in the workflow dispatch case, since it reuses the existing frontend image tag generated during the push event, which already has the branch name prefixed. In the Dockerfile, removing the hardcoded branch name, since with this change the branch name is already included in the image tag.
Task A-7: Certificates added to internal Dockerfiles. Refer to the issue comment for details: Task A-7: e-mission/e-mission-docs#1082 (comment). The certificates are relevant to our internal AWS configuration and not needed externally. They could be present externally without any major effect, but removing them helps keep the base image clean. Additionally, anyone working with the code can add their own certificates if needed, or adopt an approach that doesn't need certificates in the first place.
Refer to details in cleanup issue: Task A-6: e-mission/e-mission-docs#1082 (comment)
Refer to details in cleanup issue: Task A-2: e-mission/e-mission-docs#1082 (comment)
Storing the latest tag as well so that artifacts are not needed. For now, not removing artifacts until the internal script is updated to handle this change.
Task A-8: Prefixing the branch name to the docker tag along with the date. The internal script will no longer need to maintain the different branch lists, since the images will be completely tagged in the external workflows themselves; we can then use the tags without modifications. For now, not prefixing the branch to the artifact, since we will be removing the artifact anyway and the current internal script works with artifacts. Once I update the internal script, I will come back and remove the artifacts.
Refer to details in cleanup issue: Task A-6: e-mission/e-mission-docs#1082 (comment)
Refer to details in cleanup issue: Task A-2: e-mission/e-mission-docs#1082 (comment)
Added .env file initialized with the current latest tag of the admin-dash and server images. The internal script can read the docker image tags directly from .env.tags, using curl to fetch the raw file contents. Storing the server tag as well since the admin-dash Dockerfile uses it.
Removed workflow dispatch inputs: inputs are no longer needed since we read from .env.tags in the server repo directly.
Read raw file contents directly instead of using the REST API: the REST API endpoint returns base64-encoded data which then needs to be decoded; we can simply read the raw contents of the publicly available file.
For now, not removing artifacts until the internal script is updated to handle this change.
…ternal
Task A-8: Prefixing the branch name to the docker tag along with the date. The internal script will no longer need to maintain the different branch lists, since the images will be completely tagged in the external workflows themselves; we can then use the tags without modifications. For now, not prefixing the branch to the artifact, since we will be removing the artifact anyway and the current internal script works with artifacts. Once I update the internal script, I will come back and remove the artifacts. In the Dockerfile, removing the hardcoded branch name, since with this change the branch name is already included in the image tag.
Task A-7: Certificates added to internal Dockerfiles. Refer to the issue comment for details: Task A-7: e-mission/e-mission-docs#1082 (comment). The certificates are relevant to our internal AWS configuration and not needed externally. They could be present externally without any major effect, but removing them helps keep the base image clean. Additionally, anyone working with the code can add their own certificates if needed, or adopt an approach that doesn't need certificates in the first place.
Refer to details in cleanup issue: Task A-6: e-mission/e-mission-docs#1082 (comment)
Refer to details in cleanup issue: Task A-2: e-mission/e-mission-docs#1082 (comment)
Storing the server tag as well so that artifacts are not needed. Can also remove the image tag passed as an input in the workflow dispatch POST request; the workflow input has also been removed in the dashboard workflows. For now, not removing artifacts until the internal script is updated to handle this change.
Task A-8: Prefixing the branch name to the docker tag along with the date. The internal script will no longer need to maintain the different branch lists, since the images will be completely tagged in the external workflows themselves; we can then use the tags without modifications. For now, not prefixing the branch to the artifact, since we will be removing the artifact anyway and the current internal script works with artifacts. Once I update the internal script, I will come back and remove the artifacts.
…ally
Refer to the issue comment for details: Task A-7: e-mission/e-mission-docs#1082 (comment). The certificates are relevant to our internal AWS configuration and not needed externally. They could be present externally without any major effect, but removing them helps keep the base image clean. Additionally, anyone working with the code can add their own certificates if needed, or adopt an approach that doesn't need certificates in the first place.
Task A-8: Stored tag format changed to branch_name-timestamp
Currently the docker images are tagged as <branch_name>_<timestamp>, but the stored artifact tags only had the timestamp. This causes a problem if the docker image upload is done from a different branch in the external GitHub workflow, since the complete tag returned in the internal script would still have the default hardcoded branch_name but the latest timestamp. Hence, we now store the same tag that the docker image is tagged with, including both the branch_name and the timestamp.
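As a small illustration, the stored tag could be produced from the branch name plus a timestamp in a single step (the timestamp format and output name are illustrative):

```yaml
steps:
  - name: Generate the full image tag (branch name + timestamp)
    id: tag
    run: |
      # github.ref_name / GITHUB_REF_NAME is the branch that triggered the workflow,
      # so the stored tag matches the tag the image is actually pushed with.
      echo "docker_image_tag=${GITHUB_REF_NAME}_$(date +'%Y-%m-%d--%M-%S')" >> "$GITHUB_OUTPUT"
```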
Task A-5: Reusable workflows for unifying workflows to reduce duplicated code
Shankari's comments:
Initially, I thought that only the dashboard workflows would be affected, but after the first round of changes for this cleanup issue, more workflows needed to change as well. Reusable workflows are the perfect way to achieve this. I have added conditional checks on the repo name in the reusable workflow file that determine which statements to execute, depending on which repo the workflow is running for (see the sketch after the testing note below). This is used both for push events specific to a repo and for the workflow dispatch events triggered on pushes to the server repo.
Advantages of reusable workflows:
Testing done
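As a rough illustration of the reusable-workflow layout with the repo-name check described above (this is a sketch, not the actual workflow; the file name, tag format, and repo name in the condition are assumptions):

```yaml
# Reusable workflow, e.g. .github/workflows/image_build_push.yml (name illustrative)
on:
  workflow_call:
    outputs:
      docker_image_tag:
        value: ${{ jobs.build.outputs.docker_image_tag }}

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      docker_image_tag: ${{ steps.tag.outputs.docker_image_tag }}
    steps:
      - uses: actions/checkout@v4
      - id: tag
        run: echo "docker_image_tag=${GITHUB_REF_NAME}_$(date +%Y-%m-%d--%M-%S)" >> "$GITHUB_OUTPUT"
      - name: Repo-specific steps
        # repo-name check: github.repository reflects the *caller* repo
        if: ${{ github.repository == 'e-mission/op-admin-dashboard' }}
        run: echo "Run admin-dash specific build steps here"
```

Each dependent repo would then call it from its own workflow with a job like `uses: e-mission/e-mission-server/.github/workflows/image_build_push.yml@master`, assuming the reusable workflow lives in the server repo.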
Highlighting some problems faced while implementing, and the approaches taken to resolve them:
1. Hence, I passed in the existing fine-grained token while checking out:
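A minimal sketch of what that checkout step could look like (the secret name is an assumption, not necessarily the one actually used):

```yaml
steps:
  - uses: actions/checkout@v4
    with:
      # fine-grained personal access token stored as a repository secret
      token: ${{ secrets.GH_FG_PAT }}
```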
2. After solving the above problem, adding the fine-grained token resulted in infinite workflow runs.
Now, the question I asked myself was: why isn't this happening in the current codebase, where the exact same workflow file exists? We haven't seen infinite workflow runs due to the commits made by GitHub Actions to update the .env files. The answer is that currently, in the dashboard workflows that update the .env file, we are using the default GITHUB_TOKEN, and commits made with it do not trigger new workflow runs.
Dilemma
Resolution
1. Authorize
2. Prevent infinite workflow runs by checking the latest commit's details. In the reusable workflow, these are the git user details used:
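Something along these lines, as a sketch (the user name and email values are placeholders, not necessarily the ones actually configured):

```yaml
- name: Commit the updated .env file
  run: |
    # git identity used for the automated commit
    git config user.name "GitHub Actions bot"
    git config user.email "github-actions@github.com"
    git add .env.tags && git commit -m "Update .env.tags" && git push
```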
In the caller workflows in the join repository and the dashboard repositories, I am now checking for the username:
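A sketch of what that guard could look like in a caller workflow (the author name and workflow path are assumptions; head_commit is only populated for push events):

```yaml
jobs:
  build-and-push:
    # skip when the latest commit came from the automation user, so the
    # automated .env update does not retrigger this workflow
    if: ${{ github.event.head_commit.author.name != 'GitHub Actions bot' }}
    uses: e-mission/e-mission-server/.github/workflows/image_build_push.yml@master
```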
How the process would look now:
Task A-4: Run image_build_push only after tests pass successfully
Approach:
Test Workflow runs
Details
GitHub Actions doesn't have an out-of-the-box solution for running a workflow based on the results of multiple workflows where ALL of them must have completed successfully. workflow_run exists, but it triggers the dependent workflow when any one of the prerequisite workflows completes. Found an alternative suggestion here: make the test workflows callable, so they can then be called as jobs in the workflow that needs them to run first (see the sketch below).
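A sketch of that layout (the workflow file names are assumptions):

```yaml
jobs:
  test-with-docker:
    uses: ./.github/workflows/test-with-docker.yml
  test-with-manual-install:
    uses: ./.github/workflows/test-with-manual-install.yml
  build-and-push:
    # runs only if every job listed in `needs` completed successfully
    needs: [test-with-docker, test-with-manual-install]
    uses: ./.github/workflows/image_build_push.yml
```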
Also, removed the
Task A-1: Switch to storing images in GHCR
This was a fairly easy switch following the documentation. Will need to check the following once the first image is deployed to GHCR:
1. Changes to Dockerfile
2. Workflow file
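As an illustration of what the GHCR-related workflow changes typically look like (a sketch only; the action versions, image name, and tag are assumptions, and the Dockerfile side would correspondingly point its FROM lines at ghcr.io):

```yaml
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # needed to push to ghcr.io with GITHUB_TOKEN
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          # image name and tag are illustrative
          tags: ghcr.io/e-mission/e-mission-server:example-tag
```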
Task A-3: Image build/push on release instead of on every push
This was a somewhat complex change, since I had to handle two different event triggers (workflow_dispatch, release) and how the version tags would need to be updated separately for each of these event types. Converting the main image_build_push workflow to a reusable workflow was a great idea, as it saved a lot of time; otherwise I would have had to repeat the same code, steps, and checks for all the repositories. Sample runs passed: server, join, admin-dash, public-dash
The major changes as a part of this task:
1. Re-building on releases instead of tags
2. Both tests and build will happen for push events
3. Dispatch only runs on
4. Additional version increment steps + creating incremented version tags and releases in the workflow. Creating a release (documentation)
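A sketch of how the two event triggers could be handled in one workflow (event names come from the description above; the version logic is only a placeholder):

```yaml
on:
  release:
    types: [published]   # rebuild images when a release is published
  workflow_dispatch:      # still allow manual/cascaded triggers

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Pick the version tag depending on the trigger
        run: |
          if [ "${{ github.event_name }}" = "release" ]; then
            # use the tag of the release that triggered the run
            echo "VERSION=${{ github.event.release.tag_name }}" >> "$GITHUB_ENV"
          else
            # workflow_dispatch: increment the latest existing release tag
            # (increment logic omitted; see the sort -V sketch further below)
            echo "VERSION=incremented-version-placeholder" >> "$GITHUB_ENV"
          fi
```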
Listing reasoning behind certain approaches taken
Latest release: documentation, link. Of these, "list all releases" is the correct one for this use case, as we want to fetch the releases and not the tags. Added more event_name checks for push, release, and workflow_dispatch for updating tags with either
A. Using
Might need to specify fetch-depth: 0 for actions/checkout to ensure the full commit history is fetched.
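The exact git command this option refers to was not captured above; as an assumption, a git-based lookup of the most recent tag with full history might look like:

```yaml
steps:
  - uses: actions/checkout@v4
    with:
      fetch-depth: 0   # fetch full history so all tags are available locally
  - name: Show the most recent tag reachable from HEAD
    run: git describe --tags --abbrev=0
```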
Reasons for not using this
B. Using GitHub REST API endpoints
Discussions / Posts: Git tags sort discussion, GitHub issues. As discussed in this comment, using the "list all releases" API endpoint.
These commands work in the current repo, however, whereas we want to sort the tags received in the curl response for the server's release tags. So for our case it would look like:
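A sketch of that, assuming the public "list releases" endpoint for the server repo and a natural version sort (the jq filter is illustrative of the approach, not copied from the actual script):

```yaml
steps:
  - name: Find the latest server release tag
    run: |
      # GET /repos/{owner}/{repo}/releases returns all releases;
      # pick the highest tag with a natural version sort
      latest=$(curl -s https://api.github.com/repos/e-mission/e-mission-server/releases \
        | jq -r '.[].tag_name' | sort -V | tail -n 1)
      echo "latest release tag: $latest"
```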
All PRs tracker (#1082 (comment))
Copied over pending tasks from:
e-mission/e-mission-server#961 (comment)
- ghcr.io so that we don't run into issues with usage limits
- .env file in the dashboards. Can't this just be an output of the run?
- secret auth method since it is unused
- e-mission-common to the various repos
- e-mission-core as a separate library that is similar to e-mission-common
- e-mission-core into the dashboard repos then; we don't need the full analysis pipeline
- <main_branch>_<date> but the uploaded/stored tag is only the date, so we need to prepend the branch externally. And the branch is not always the same.