
Redesign Build Deployment Process (External) #125

Merged · 73 commits · Aug 14, 2024

Conversation

MukuFlash03
Contributor

@MukuFlash03 MukuFlash03 commented Mar 26, 2024

This PR implements this issue for redesigning our build and deployment processes, primarily across e-mission-server, join, admin-dash, and public-dash.

Collaborating on this along with @nataliejschultz .

All four redesign PRs for the external repos:
E-mission-server: e-mission/e-mission-server#961
Join: e-mission/nrel-openpath-join-page#29
Admin-dash: e-mission/op-admin-dashboard#113
Public-dash: #125


A rough plan can be:

1st phase

  • Consolidating Differences
    • Minimize the differences between the internal and external versions, clean up unnecessary files, and so on.
  • Image_Build_Push.yml:
    • CI / CD via GitHub Actions to build and push images to Dockerhub for all four repos, based on the current process for e-mission-server.
    • The image build-and-push workflow builds only for its own repo, when a merge is made to a specific branch.
    • Additionally, for e-mission-server, image-build-push will now also need to trigger the GitHub Actions workflows in join, admin-dash, and public-dash.
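The cross-repo trigger described above could look roughly like the following sketch, using the GitHub REST API's workflow_dispatch endpoint. The owner, token handling, and input name are assumptions, not the final implementation; it is shown as a dry run that prints the request it would make.

```shell
# Hedged sketch, not the final implementation: after the server image is
# pushed, trigger image_build_push.yml in each downstream repo via the
# GitHub REST API. The real step would uncomment the curl and supply $TOKEN.
OWNER="e-mission"
TAG="2024-04-12--01-01"   # example timestamp tag passed as a workflow input
for REPO in nrel-openpath-join-page op-admin-dashboard em-public-dashboard; do
  URL="https://api.github.com/repos/$OWNER/$REPO/actions/workflows/image_build_push.yml/dispatches"
  BODY="{\"ref\":\"main\",\"inputs\":{\"docker_image_tag\":\"$TAG\"}}"
  echo "POST $URL $BODY"
  # curl -X POST -H "Authorization: Bearer $TOKEN" \
  #      -H "Accept: application/vnd.github+json" "$URL" -d "$BODY"
done
```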

2nd phase:

  • Multi-tier uniform repo structure in nrelopenpath
    • Need to change the internal repos to pull from the externally built images for each repo without duplicating the entire external code repository internally.

MukuFlash03 and others added 9 commits March 18, 2024 16:47
…ed viz_scripts Dockerfile

viz_scripts Dockerfile contains ENV variables from docker-compose as well.

Added docker image commands to image_build_push yml for dashboard and notebook images
Added docker commands for dashboard and notebook  + Updated Dockerfile
1. Changed sed to jq, to make it consistent with what is being used in the internal repos.

2. Renamed image pushed to docker hub.
Changed sed to jq + Renamed docker image in image_push
Currently the branch specified, "image-push-merge", is only available locally on my system.

I use it to test the automated docker image push mechanism whenever any changes are merged to this branch.

Once everything looks good, we need to change this to master or main as appropriate for the repo.
Added TODO to change image push branch
Had added it initially for testing purposes.
Can remove now so it doesn't expose any sensitive info.
Removed printing Docker username
@MukuFlash03
Contributor Author

Checking in the first set of changes for consolidating the differences between the external and internal versions.
These are based on the differences identified here

Current changes involve:

  1. Sed changed to jq
    Changed it in the script viz_scripts/docker/start_notebook.sh to match the way it is done internally.
    TODO: We might not even need to do this once we decide how to handle environment variables; we might stop using the respective conf files altogether.

These changes are still pending:
viz_scripts/docker/start_notebook.sh: Python notebooks run directly internally instead of via cron tab
viz_scripts/Dockerfile: AWS certificates.
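For illustration, the sed-to-jq change in item 1 might look like the following sketch. The config file name, key, and values here are made up for the example; the point is that jq edits the JSON structurally instead of doing a textual substitution.

```shell
# Illustration only: replace a sed-based rewrite of a sample config with jq.
echo '{"timeseries":{"url":"localhost"}}' > db.conf.sample
DB_HOST="db"
# old approach: sed "s/localhost/$DB_HOST/" db.conf.sample > db.conf
jq --arg host "$DB_HOST" '.timeseries.url = $host' db.conf.sample > db.conf
cat db.conf
```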

Mahadik, Mukul Chandrakant and others added 2 commits March 28, 2024 15:53
1. docker/cert.sh
- Uses the new environment variable PROD_STAGE to determine whether we are in a staging / production environment, and only then installs certificates; else it skips them.

2. Dockerfile
Added environment variables here, keeping the same default ENV values as the ones in docker-compose.yml.
Not adding in docker/Dockerfile.dev, since this change is being done primarily with the objective to aid with the automated build and push to Dockerhub.
This pushed image would then be the one that would be used in the internal environments as the base image, which would be based on the non-dev Dockerfile.
Added cert.sh + Modified Dockerfiles
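A minimal sketch of the cert.sh idea described above, assuming PROD_STAGE is compared against the literal string TRUE; the actual file and certificate-install commands may differ.

```shell
# Sketch only: install certificates only in staging/production builds,
# as signalled by PROD_STAGE; skip otherwise so external builds work.
PROD_STAGE="${PROD_STAGE:-}"   # set via the Dockerfile ENV in the real flow
if [ "$PROD_STAGE" = "TRUE" ]; then
  MSG="Staging/production build: installing certificates"
  # cp certs/*.crt /usr/local/share/ca-certificates/ && update-ca-certificates
else
  MSG="External/dev build: skipping certificate installation"
fi
echo "$MSG"
```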
@nataliejschultz
Contributor

I wanted to comment on the addition of cert.sh to the repo. This is a file that is better kept internal-only, and it will be added to the internal container build by using the prod/jenkins variable. However, we can keep it external for testing.

For example, we can add our prod/jenkins variable to our GitHub action, and see what it looks like when the different portions of code are run according to our script. This way we can perfect it before modifying the internal repos.

Mahadik, Mukul Chandrakant and others added 6 commits April 2, 2024 17:12
The other ENV vars that were added in Docker-compose files are currently present in the Dockerfile in the external repo root.
Removing them from there and can add them as per requirement.
For local testing, add them when “docker run” is used using the -e flag.
For usage in stage / production, can be set by cloud team in AWS Codebuild as they currently do.

CRON_MODE
CRON_MODE can be moved to being set when the docker run command is executed, using the -e flag.
This is because it is required in start_notebook.sh, in the CMD layer of the Dockerfile, which is executed when the container is run.
Hence, the CRON_MODE value can be supplied later and does not need to be embedded in the image.

PROD_STAGE
Not setting PROD_STAGE in the docker run command, since it is required during the docker image build to decide whether or not to add certificates.
Hence, adding it in the Dockerfile so it’s available during docker build.
Removing ENV variables from Dockerfile
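As a rough illustration of supplying CRON_MODE at `docker run -e` time, start_notebook.sh could branch on it like this; the exact mode semantics and default are assumptions for the sketch.

```shell
# Sketch: CRON_MODE arrives from `docker run -e CRON_MODE=...` rather than
# being baked into the image; the default and branch meanings are assumed.
CRON_MODE="${CRON_MODE:-TRUE}"
if [ "$CRON_MODE" = "TRUE" ]; then
  MODE_DESC="running notebooks via crontab"
else
  MODE_DESC="starting an interactive notebook server"
fi
echo "$MODE_DESC"
```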
Can directly set DB_HOST since we can now use environment variables.
No need to use jq or sed to replace file contents and copy over files.
Removed sed / jq usage from start scripts
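The simplification can be sketched like this: instead of rewriting a conf file with sed/jq, the start script just reads the environment variable directly. The default value here is illustrative.

```shell
# Sketch: read DB_HOST straight from the environment (set via docker run -e
# or docker-compose), with an illustrative fallback for local runs.
DB_HOST="${DB_HOST:-localhost}"
echo "Connecting to the db at $DB_HOST"
# old approach (no longer needed): jq/sed rewriting conf files to inject DB_HOST
```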
Created a new branch image-push-merge for e-mission-server in my forked repo.
Also modified image push workflow to build and push docker image to docker hub on push to image-push-merge branch.

Doing this since admin-dash was failing when building from the internal repo, as it was still referring to old server code.
Changed Dockerfile and docker/Dockerfile.dev in public-dash and admin-dash to build from this new image
mukuflash03/e-mission-server:image-push-merge_2024-04-12--01-01

Redesigned server image is built from the image-push-merge branch on my personal forked repository.
This branch has a workflow run set up to build the docker image and push it to Dockerhub whenever a push or merge happens to image-push-merge branch.
Currently, I've been pushing to image-push and then creating a PR to merge into image-push-merge.

Doing this, so admin-dash and public-dash can build from the latest redesigned server code.
This is not the latest server code but the latest changes from my redesign PR.
Changing to build from base server image from my personal Dockerhub repository with redesigned server code.
Will need to change to build from Shankari's Dockerhub repository, once all changes are final.

@nataliejschultz nataliejschultz left a comment


More reminders, and a comment on why we need the script.

Review threads (resolved) on:
.github/workflows/image_build_push.yml
viz_scripts/Dockerfile
viz_scripts/docker/Dockerfile.dev
viz_scripts/docker/cert.sh
@MukuFlash03
Contributor Author

Docker Volume usage

When testing initially, I was unable to see generated plots in the frontend dashboard page.
This is when I learnt about how volumes defined in docker-compose actually work.

So, I was testing with the internal repos,

  • Created folders in internal nrelopenpath multi-tier.

    • Created public_dash directory and moved frontend and viz_scripts inside it.
    • Created .gitignored common plots directory in public-dash.
  • Added -v flags to both frontend and viz_scripts containers docker run commands.

  • This creates a two-way connection: any changes in either of the directories before and after the colon in the -v flag will be reflected in the other.

    • Thus, viz_scripts should update and store plots in its internal docker plots/
    • Due to volume connection, this should update and also store plots in external local or host plots/
    • Again, frontend is also connected to the external volume, so it should also have access to plots/ in its internal docker public/plots/

Great! It’s working. Plots visible.

  • The plots do come up with the label “Unable to generate plots”, though, shown as empty tables with 0 (zero) values.

    • Hopefully it’s there for internal repo main branch as well.
      • Yes, same message of missing plots. Perhaps it was unable to read from the db as no data was loaded. [Confirmed, this was the reason.]
  • Run the “docker run” command inside the nrelopenpath/public_dash/ directory, else plots/ gets created outside public_dash in the root nrelopenpath folder.

    • No issues as such, but let’s keep all public-dash stuff inside its own directory.
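The bind-mount behavior described above can be demonstrated with a tiny sketch. The docker commands are shown as comments since they need the actual images; the paths and file name are illustrative.

```shell
# Demo of the two-way bind mount idea: a file written on the host side is
# what both containers would see at their respective mount points.
mkdir -p plots
# docker run -v "$(pwd)/plots:/plots" em-pub-dash/viz-scripts        # writer
# docker run -v "$(pwd)/plots:/public/plots" em-pub-dash/frontend    # reader
touch plots/example_chart.png   # stands in for a plot written by viz_scripts
ls plots
```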

MukuFlash03 pushed a commit to MukuFlash03/em-public-dashboard that referenced this pull request Aug 14, 2024
Workflow dispatch event is triggered on changes to server image.
This should build the notebook server image as it uses the server image.

But the frontend is a JavaScript Node.js based image and is unrelated to the server.
Check this review comment:
e-mission#125 (comment)
Mahadik, Mukul Chandrakant added 4 commits August 13, 2024 19:59
Earlier commit had changed it to use PROD file but this file doesn't exist.
The prod version of the file we have is docker-compose.yml instead.
Using dev version only.
Workflow dispatch event is triggered on changes to server image.
This should build the notebook server image as it uses the server image.

But the frontend is a JavaScript Node.js based image and is unrelated to the server.
Check this review comment:
e-mission#125 (comment)
MukuFlash03 pushed a commit to MukuFlash03/op-admin-dashboard that referenced this pull request Aug 14, 2024
Similarly done with public-dashboard.
See comment:
e-mission/em-public-dashboard#125 (comment)
Mahadik, Mukul Chandrakant and others added 2 commits August 13, 2024 22:41
This was mainly needed for Push event but since Workflow dispatch event would be setting the latest server image tag in .env file, the push event can read from this file directly.

@shankari shankari left a comment


@MukuFlash03 one more minor change. But changing the repo name later will be complicated, so let's try to get this right the first time.

docker image tag em-pub-dash-prod/frontend:latest $DOCKER_USER/${GITHUB_REPOSITORY#*/}:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}
docker image tag em-pub-dash-prod/viz-scripts:latest $DOCKER_USER/${GITHUB_REPOSITORY#*/}_notebook:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}
if [ "${{ github.event_name }}" == "push" ]; then
docker image tag em-pub-dash/frontend:latest $DOCKER_USER/${GITHUB_REPOSITORY#*/}:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}

Let's make sure that both the notebook image and the webserver image are labeled. Otherwise just seeing a "dashboard" image will cause confusion.

Suggested change
docker image tag em-pub-dash/frontend:latest $DOCKER_USER/${GITHUB_REPOSITORY#*/}:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}
docker image tag em-pub-dash/frontend:latest $DOCKER_USER/${GITHUB_REPOSITORY#*/}_frontend:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}


- name: push docker images
run: |
docker push $DOCKER_USER/${GITHUB_REPOSITORY#*/}:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}
if [ "${{ github.event_name }}" == "push" ]; then
docker push $DOCKER_USER/${GITHUB_REPOSITORY#*/}:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}

Suggested change
docker push $DOCKER_USER/${GITHUB_REPOSITORY#*/}:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}
docker push $DOCKER_USER/${GITHUB_REPOSITORY#*/}_frontend:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}

@shankari
Contributor

Squash-merging again (similar to e-mission/e-mission-server#961 (comment)) since otherwise we have 73 commits for 8 files.

@shankari shankari merged commit ac676e9 into e-mission:main Aug 14, 2024
@shankari
Contributor

shankari commented Aug 14, 2024

@MukuFlash03 This push generated a build which failed: https://github.com/e-mission/em-public-dashboard/actions/runs/10393542152
Seems to be related to the recent changes around not building the frontend for workflow_dispatch.
Did you test those changes?

@MukuFlash03
Contributor Author

MukuFlash03 commented Aug 14, 2024

This push generated a build which failed: https://github.com/e-mission/em-public-dashboard/actions/runs/10393542152

We are using the non-dev version of the docker compose file to build the image.
This renames the image with the -prod suffix:

dashboard:
    image: em-pub-dash-prod/frontend

The fix should be to simply change the names of both the frontend and notebook images to add the prod suffix.

@MukuFlash03
Contributor Author

MukuFlash03 commented Aug 14, 2024

@nataliejschultz had added the commit but it was using the incorrect file name.

So I changed the file name in this commit. But I ended up changing the image name as well in this commit, which is wrong.

@shankari
Contributor

@MukuFlash03 Note that, with these changes, the frontend and notebook images will have different tags. While you are changing this, you may want to change the final step to upload an artifact with two tags (or two artifacts, one for each image). @nataliejschultz will then have to change the internal script to handle both tags

@MukuFlash03
Contributor Author

@MukuFlash03 Note that, with these changes, the frontend and notebook images will have different tags. While you are changing this, you may want to change the final step to upload an artifact with two tags (or two artifacts, one for each image). @nataliejschultz will then have to change the internal script to handle both tags

We do not need changes for this since the tag is the timestamp generated:

    - name: push docker images
      run: |
        if [ "${{ github.event_name }}" == "push" ]; then
          docker push $DOCKER_USER/${GITHUB_REPOSITORY#*/}_frontend:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}
        fi
        docker push $DOCKER_USER/${GITHUB_REPOSITORY#*/}_notebook:${GITHUB_REF##*/}_${{ steps.date.outputs.date }}
   
    - name: Create a text file
      run: |
          echo ${{ steps.date.outputs.date }} > public_dash_tag_file.txt
          echo "Created tag text file"

This is the same for all images in the workflow run. Internally, the image name without the timestamp differs for the frontend and notebook images, but that part is hardcoded anyway since it is static. Only the timestamp tag changes based on the workflow run.
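A sketch of how the shared timestamp tag presumably flows into both image names: the date format is inferred from tags like 2024-04-12--01-01, and DOCKER_USER plus the hardcoded repo/ref values are placeholders for what Actions provides.

```shell
# One "date" step output feeds both image names; only the name prefix differs.
DATE=$(date +'%Y-%m-%d--%H-%M')                      # format inferred from existing tags
GITHUB_REPOSITORY="e-mission/em-public-dashboard"    # provided by Actions; hardcoded here
GITHUB_REF="refs/heads/main"
DOCKER_USER="user"                                   # placeholder
FRONTEND_IMAGE="$DOCKER_USER/${GITHUB_REPOSITORY#*/}_frontend:${GITHUB_REF##*/}_${DATE}"
NOTEBOOK_IMAGE="$DOCKER_USER/${GITHUB_REPOSITORY#*/}_notebook:${GITHUB_REF##*/}_${DATE}"
echo "$FRONTEND_IMAGE"
echo "$NOTEBOOK_IMAGE"
```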

MukuFlash03 pushed a commit to MukuFlash03/em-public-dashboard that referenced this pull request Aug 14, 2024
@shankari
Contributor

shankari commented Aug 14, 2024

This is the same for all images in the workflow run. Internally, the image name without the timestamp differs for the frontend and notebook images, but that part is hardcoded anyway since it is static. Only the timestamp tag changes based on the workflow run.

@MukuFlash03 For the workflow_dispatch runs, it is not.
Consider the following:

  • push: both images are tagged with 2024-08-14
  • new server build on 2024-08-16, viz_script is tagged with 2024-08-16, frontend is still at 2024-08-14

@MukuFlash03
Contributor Author

I see.
In that case, @nataliejschultz can handle this in the internal script by checking for the type of trigger event.
I did inspect the response from the GitHub REST API (URL for my branch workflows).

The event type is available:

      "run_number": 49,
      "event": "push",
      "status": "completed",
      "conclusion": "success",
      "workflow_id": 90180351,
      ...
      "run_number": 31,
      "event": "workflow_dispatch",
      "status": "completed",
      "conclusion": "success",
      "workflow_id": 90180351,
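For example, the event type could be pulled out of that response with jq; the local file name and the use of the first (most recent) array element are assumptions for the sketch.

```shell
# Sketch: save the workflow-runs API response and extract the trigger event
# of the most recent run with jq.
cat > runs.json <<'EOF'
{"workflow_runs":[
  {"run_number": 49, "event": "push", "status": "completed", "conclusion": "success"},
  {"run_number": 31, "event": "workflow_dispatch", "status": "completed", "conclusion": "success"}
]}
EOF
LATEST_EVENT=$(jq -r '.workflow_runs[0].event' runs.json)
echo "$LATEST_EVENT"
```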

@shankari
Contributor

@MukuFlash03 Just because something is possible doesn't mean that it is the elegant approach.

If you look at the script that @nataliejschultz wrote, I think it would be a lot easier to just download two archive files. Note that she has a common function to download the archive file, which is invoked once for each tag. Having to look at the trigger type for only this repo would make that script a lot uglier.

@MukuFlash03
Contributor Author

I tried implementing the logic for handling two different tags for frontend and notebook, but it's not correct.
I'm trying to think it through, but we don't currently maintain separate tags for frontend / notebook in the workflow.

For push events, we can have two artifacts. They'll have the same timestamps as tags.
For workflow_dispatch, for the notebook image we'll use the latest timestamp for the 1st artifact.

But what should be passed as the 2nd artifact for the frontend image?
We'll need to retrieve the last stored tag for frontend from somewhere.

@MukuFlash03
Contributor Author

I tested the end-to-end flow with workflow dispatch from the server repo in these runs:

Well, two tags are generated in the public-dash workflow run, but the tags are wrong.

The notebook tag is correct, since it is fetched from the current timestamp in either case.

But the frontend tag is storing the server image tag.
It should store the last tagged frontend image.

So, as mentioned in this comment, I'm thinking about how to retrieve the last tag for the frontend image. This will need to be stored / maintained somewhere as well.

@shankari
Contributor

shankari commented Aug 14, 2024

I can think of tons of options - here are two off the top of my head

  • you can just save it in a different .env file which is overridden on push but not on workflow_dispatch
  • Or you can remove the optimization and rebuild both images every time, arguing that it makes the downstream flow easier and that the frontend image size is small (you have to check and actually make that argument)
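The first option could be sketched as follows, reusing the .env.repoTags name that appears later in this thread; the variable names, event value, and timestamp are illustrative.

```shell
# Sketch of option 1: a push run overwrites the frontend tag file; a
# workflow_dispatch run leaves it alone and just re-reads the stored value.
EVENT_NAME="push"              # ${{ github.event_name }} in the real workflow
DATE="2024-08-14--15-43"       # this run's timestamp tag (example)
if [ "$EVENT_NAME" = "push" ]; then
  echo "FRONTEND_IMAGE_TAG=$DATE" > .env.repoTags
fi
# Either trigger then reads the tag to upload as the frontend artifact:
FRONTEND_TAG=$(grep FRONTEND_IMAGE_TAG .env.repoTags | cut -d= -f2)
echo "$FRONTEND_TAG"
```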

@MukuFlash03
Contributor Author

The frontend image is small, ~50 MB [Source: my Dockerhub account].
But I've gone ahead and implemented it using a different .env file.

[Screenshot from 2024-08-14, 3:43 PM]

@MukuFlash03
Contributor Author

  1. Push event run

This workflow run was triggered by a PUSH event.
The tags were updated correctly and two artifacts produced (FRONTEND, NOTEBOOK).


  2. Workflow Dispatch run [server, public-dash, admin-dash]

Next, I triggered a workflow run from e-mission-server via Workflow Dispatch event.

This was able to read the current FRONTEND image tag from the .env.repoTags file and upload the existing value as an artifact.
The NOTEBOOK tag, on the other hand, is updated normally, as it should be, irrespective of event type.

shankari pushed a commit to e-mission/op-admin-dashboard that referenced this pull request Aug 15, 2024
* Create image_build_push.yml

* Corrected Dockerfile path

* Added Dockerfile in repo root.

Need to make it available outside the docker folder as the build context was specified in the docker compose as the repo root directory and corresponding paths in the Dockerfile were relative to the root directory.

Additionally, added Environment variables defined in docker compose file.

The image wasn't running directly after building because these things were missing:
- needed to add db container as well
- ports needed to be exposed for both db and dashboard container
- common network needed to be added under which both the containers had to be linked
- all environment variables had to be added, especially important ones like DB_HOST

* Replaced config.py with config-fake.py in external Dockerfile.

Encountered error while building and pushing docker image as a part of the CI.
Due to config.py, mentioned in the COPY command in the Dockerfile, not being present in the repo online.
Added config-fake.py in its place for now, so image is pushed to Dockerhub.
But eventually, will need to handle as per decision to consolidate differences in external and internal repos.

* Copy config-fake.py as config.py

In the external public Docker image, make the config-fake.py available as config.py by copying it over into the container as config.py

Internally, if custom config.py is needed, can copy it over in the internal Dockerfile.

But might not even need the config.py

* Removed expose port duplication

Found usage of two variables referring to same port.
Had exposed using both variable names.
But since same port exposed, should be fine to expose just once.

* Multiple changes for external repo differences

1. Changes in related files including Dockerfile for loading OpenPATH logo from local image rather than URL link.

2. Changed sed to jq in start.sh.

3. Changed dash version to latest v 2.16.1

4. Added new Dockerfile for image_build_push yml.

Removed assets/ from COPY command as only contents of assets/ were being copied to root app directory.

In COPY assets command, copy entire directory instead of just css file.

* Added TODO to change image push branch

Currently the branch specified - "image-push-merge" is available locally on my system.
I use it to test the automated docker image push mechanism whenever any changes are merged to this branch.
Once everything looks good, we need to change this to master or main as appropriate for the repo.

* Removed printing Docker username

Had added it initially for testing purposes.
Can remove now so it doesn't expose any sensitive info.

* Replacing CognitoConfig class with ENV variables

- Option 1: Can we read in these values directly from environment variables instead of reading from a class?

- Option 2: Create a class that makes all environment variables available as python variables

For now, I’m going ahead with reading in these variables directly.
So, will need to replace all uses of CognitoConfig class with variable name directly in files that use it.

Shankari mentioned in a commit whether we even need config.py?
927817a

Also, removed references to both config-fake.py and config.py in the Dockerfiles.

* Replaced dictionary in config.py

Added actual template value directly where the variable was being used through the config.py file.

* Removing ENV variables from Dockerfile

The other ENV vars that were added in Docker-compose files are currently present in the Dockerfile in the external repo root.

Removing them from there and can add them as per requirement.

For local testing, add them when “docker run” is used using the -e flag.

For usage in stage / production, can be set by cloud team in AWS Codebuild as they currently do.

* Update requirements.txt

Changed pillow version to 10.3.0
Editing online manually to test if Github action still works after re-adding image_build_push.yml file.

* Removed sed / jq usage from start scripts

Can directly set DB_HOST since we can now use environment variables.
No need to use jq or sed to replace file contents and copy over files.

* Changing base image to build from redesign server image

Created a new branch image-push-merge for e-mission-server in my forked repo.
Also modified image push workflow to build and push docker image to docker hub on push to image-push-merge branch.

Doing this since admin-dash was failing when building from the internal repo, as it was still referring to old server code.
Changed Dockerfile and docker/Dockerfile.dev in public-dash and admin-dash to build from this new image
mukuflash03/e-mission-server:image-push-merge_2024-04-12--01-01

Redesigned server image is built from the image-push-merge branch on my personal forked repository.
This branch has a workflow run set up to build the docker image and push it to Dockerhub whenever a push or merge happens to image-push-merge branch.
Currently, I've been pushing to image-push and then creating a PR to merge into image-push-merge.

Doing this, so admin-dash and public-dash can build from the latest redesigned server code.
This is not the latest server code but the latest changes from my redesign PR.

* Bumped up base server image tag

Changing to build from base server image from my personal Dockerhub repository with redesigned server code.
Will need to change to build from Shankari's Dockerhub repository, once all changes are final.

* Bump up base server image tag

* Artifact download test - 1

Added working code from join repo to fetch docker image tags using artifact download.

* Bumped up server image tag

Bumped up after fixing "url" KeyError bug in this commit in server repo:
MukuFlash03/e-mission-server@e778b3f

* Artifact + Matrix - 1

Combining both artifact and matrix methods, since the workflow is triggered by both push and workflow_dispatch events.

Workflow dispatch event receives tag via input parameter sent via server workflow.
The push event does not receive any tag, so DOCKER_IMAGE_TAG_2 would have an empty value, since for a push the triggering event's input parameter is empty.
So, we were facing the issue of an empty timestamp being suffixed to the docker image tag in the Dockerfiles.

Hence, using the logic of fetching the latest run ID and then downloading the artifact uploaded by the server workflow, to ensure that a non-empty value is retrieved for the timestamp.
This value is stored in DOCKER_IMAGE_TAG_1 and can be used for building docker image.

Now, depending on the trigger event, the appropriate docker image tag can be used in the docker build command for the --build-arg flag.

Additionally, Dockerfiles now use ARG to use the tags passed from the docker build command, hence using environment variables.

Docker-compose files similarly have the args config parameter set.

Developers would have to set the correct server image tags manually here for the docker compose command to pass the value to the ARG in the Dockerfile and correctly set the image tag to point to the appropriate server image.
Need to mention somewhere in the README.md to refer to the server image list on Dockerhub to choose the latest image tag. While pushing the image in the GitHub Actions, it'll be done manually.

Todo: Change branch name to tags-combo-approach in fetch_runID.py
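The ARG plumbing described above amounts to something like this sketch; the Dockerfile excerpt in the comments and the base image name are assumptions, and the build command is printed rather than executed since it needs the real build context.

```shell
# Sketch of passing the server tag at build time. Assumed Dockerfile side:
#   ARG SERVER_IMAGE_TAG
#   FROM emission/e-mission-server:${SERVER_IMAGE_TAG}
SERVER_IMAGE_TAG="2024-05-03--14-37"
CMD="docker build --build-arg SERVER_IMAGE_TAG=$SERVER_IMAGE_TAG -t em-pub-dash/viz-scripts ."
echo "$CMD"
```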

* Artifact + Matrix - 2

Was unable to see docker_image_tag in the echo statement in the logs.
Added a docker_image_tag print statement to see the retrieved image tag.
Ah! Was using run_id instead of docker_image_tag as the output value in the outputs section of the fetch_tag job.

* Artifact + Matrix - 3

Still not seeing the env vars echo-ed.
Debugging by printing various log statements.

* Artifact + Matrix - 4

Still not seeing the env vars echo-ed.
Debugging by printing various log statements.

* Artifact + Matrix - 5

Working finally!
Changed the deprecated set-output command to use {key}={value} format with echo command.

Also, adding commented out build and push commands to see if build, push is successful.
For this commit, not changing branch = tags-artifact in fetch_runID.py.
Once the server code has the latest working yml file, will come back and push another commit to change the branch to tags-combo-approach.
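For reference, the deprecated set-output syntax versus the {key}={value} replacement mentioned above looks like this; $GITHUB_OUTPUT is simulated with a temp file for the sketch.

```shell
# Deprecated vs. current way of setting a GitHub Actions step output.
GITHUB_OUTPUT="${GITHUB_OUTPUT:-$(mktemp)}"   # Actions provides this file path
DATE="2024-05-03--14-37"
# deprecated: echo "::set-output name=docker_image_tag::$DATE"
echo "docker_image_tag=$DATE" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"
```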

* Artifact + Matrix - 6

Removed an extra echo statement.

* Artifact + Matrix - 7

Updating Dockerfiles to use ARG environment variable with latest timestamp that will be passed through:
- `docker build --build-arg` command in Github actions in the workflow for automated pushing to Docker hub.
- `args: ` config field in docker-compose which will need to be set manually by developers locally.

Also, changing branch in fetch_runID and Dockerfiles to tags-combo-approach.

* Artifact + Matrix - 8

For public-dash, admin-dash where ARGS are now being used, need to add the args under build command in the docker compose files.
Gives error if arg is at the same hierarchical level as build.

Also, the public-dash docker-compose.yml (non-dev) version was changed to have build: context, dockerfile; unlike only build: frontend.
This allows adding args under build, similar to how it is currently built in docker-compose.dev.yml.

Also, args should be added under notebook-server and not dashboard, since viz_scripts builds off of the server image and not the frontend, which is a node image.

* Artifact + Matrix - 9

Adding .env file which stores only docker image timestamp for the latest dockerhub e-mission-server image already pushed.

.env file overwritten in both types of trigger events - push and workflow_dispatch.

Added commit and push github actions as well for pushing latest changes to the .env file made via the workflow.

Lastly, docker-compose now also mentions the ENV variable name to be read from the .env file for the ARG value in the Dockerfile.

No changes required in the Dockerfiles.

* Updated docker image tag in .env to the latest timestamp:

* Updated docker image tag in .env to the latest timestamp: 2024-05-03--14-37

* Added TODOs in github actions workflow YAML file.

Reminder for things to change as per master branch of e-mission-server once changes are finalized.

* Artifact + Matrix - 10

Added another TODO.

The run triggered by the previous push event failed.

Error occurred in GitHub actions git add, commit, push step.
If file with no changes operated upon, it leaves an error:
“nothing to commit, working tree clean
Error: Process completed with exit code 1.”

Need to fix.

——

A quick fix is to make changes to the .env file only if the workflow_dispatch event is the trigger.
Don't do anything for the push event.
So, in case anyone modifies the .env file on their own by using their own timestamp during testing, and pushes it as part of their PR, then Shankari will have to ask them to revert the changes.
Else, their custom timestamp will make it into the repo code base.

Found something:
https://www.reddit.com/r/github/comments/ju3ipr/commit_from_github_action_only_when_changes_exist/

It should work, but there's a drawback of using "exit 0": it will mask all errors generated during "git commit".
This is bad, and we won't be able to see why something went wrong, as the workflow would be shown as successful with a green tick.

Found a solution with git diff:
https://github.com/simonw/til/blob/main/github-actions/commit-if-file-changed.md

$ git diff --quiet || (git add README.md && git commit -m "Updated README")

However, I won't be able to log any message saying that there were no changes to commit and the tag was not modified.
Hence, will possibly use just "git diff --quiet" with an if-else block.

Expected results:
- Push event triggers workflow.
- It writes DOCKER_IMAGE_TAG_1 fetched from last successful completed run to .env file.
- It sees that there is a difference in the latest committed .env file in the dashboard repo which includes older timestamp.
- Hence it runs git commit part of the script to reset to latest server timestamp.
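The git diff approach with an if-else block could be sketched as below, demonstrated in a throwaway repo; the real step runs in the Actions checkout and would also git push.

```shell
# Throwaway-repo demo of commit-only-if-changed, with an else branch that
# still logs a "no changes" message instead of masking errors with exit 0.
cd "$(mktemp -d)"
git init -q .
git config user.email "ci@example.com" && git config user.name "CI"
echo "DOCKER_IMAGE_TAG_1=old-tag" > .env
git add .env && git commit -qm "init"
echo "DOCKER_IMAGE_TAG_1=2024-08-14--15-43" > .env   # workflow writes new tag
if git diff --quiet -- .env; then
  echo "No changes to commit; tag not modified"
else
  git add .env
  git commit -qm "Updated docker image tag in .env file to the latest timestamp"
  echo "Committed updated tag"
  # git push   # in the real workflow
fi
```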

* Updated docker image tag in .env file to the latest timestamp

* Updated docker image tag in .env file to the latest timestamp

* Updated docker image tag in .env file to the latest timestamp

* Update Dockerfile

Reverting the Dockerfile tag for testing so that checks can run properly

* Syntax

Removing a space.

* Updated docker image tag in .env file to the latest timestamp

* Update Dockerfile again

Also editing this file so that checks pass for now

* Finalize Dockerfile

Changing this to build from the actual e-mission server in hopes of merging!

* Update Dockerfile in docker folder

Potentially finalizing the dockerfile in the docker repo.

* Update fetch_runID.py

Removing prints/comments, updating head_branch and download_ url.

* Cleanup image_build_push.yml

Removing comments and changing branches so it's ready for merge.

* Removing docker build, updating DOCKER_IMAGE_TAG --> SERVER IMAGE TAG, removing redundant db setting

* Reverting for testing

Need to test some changes to the workflows, so temporarily reverting these changes to work with mukul's branches.

* Trying to trigger run without matrix

Changing branches to try to trigger run and see if I can test docker compose

* Changing repo name

* Updated docker image tag in .env file to the latest timestamp

* So close to trying docker compose with actions

Runner couldn't find the file (because I used the wrong filename)

* Build context change

* dockerhub repos are case sensitive.

* Updating tag

* Rename docker tag

Docker image created locally is not taking the name that I passed in for it. Hopefully, every new image created in an individual runner is named the same thing, so that this workaround is successful.

* Add artifact

Adding artifact upload for internal repo to be able to pull image tag.

* Trying to use admin dash tag

Trying to do this the way I originally planned.

* Reverting changes

Reverting the changes I made to test docker compose

* Remove extra dockerfile

* server image tag update

* Update .env

Adding other env vars as a template

* Update image_build_push.yml

Adding other env vars to .env. Otherwise, the file will get overwritten with the server tag only every time.

* Update start.sh

Remove extraneous fi

* Switching to build prod compose

Because that makes sense.

* Updated username to clarify that env file is being updated

* Modified tag variable names to be more relevant

These store tags differently depending on the trigger event - Push OR Workflow dispatch

* Removing redundant pip install

* Certificates added in Dockerfile

Similarly done with public-dashboard.
See comment:
e-mission/em-public-dashboard#125 (comment)

* Fixed indentation

* Added .env.cognito.template + Removed cognito variables from .env and workflow file

The .env file is meant to be used only for the docker image tag to store the latest server image tag.
If other variables are stored there, the workflow will overwrite the .env file to store only the image tag whenever the CI/CD runs.

Hence, separating out the Cognito variables in a separate template file.
I am expecting the users would either set them directly in the system or use the docker compose prod yml file.

* Removing artifact method

This was mainly needed for Push event but since Workflow dispatch event would be setting the latest server image tag in .env file, the push event can read from this file directly.

* Updating latest server image tag

This is probably the first and only time we'll need to manually change the image tag.
Henceforth, the CI/CD should automatically update the .env file with the latest tag whenever the workflow dispatch is triggered on new merges to the server.

* Moving .env file update and git commit to the end

This will ensure that if the CI/CD pipeline fails in any prior steps, such as the docker-related ones, the .env file isn't updated.
This is because the docker failures can include errors like image not found, which can occur due to incorrect tags.

---------

Co-authored-by: Mahadik, Mukul Chandrakant <[email protected]>
Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: Natalie Schultz <[email protected]>