- GitOps Bridge: https://github.com/gitops-bridge-dev
- CNCF Slack Channels: #argocd-gitops-bridge, #cnoe-interest
- CNOE: https://cnoe.io
- Previous KubeCon talk: Building a Bridge Between ArgoCD and Terraform
Fork this repository and update the `stacks/Pulumi.<stack>.yaml` files with your own values for the following fields:

- `githubOrg`: your GitHub org or user for the fork
- `githubRepo`: the name of the git repo, if you changed the default name for the fork
- `veleroBucketPrefix`: a unique S3 bucket name prefix
- `hubStackName`: the combination of Pulumi account and project, i.e. `pulumiaccount/projectname`
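As a sketch, the stack file might carry these fields like the excerpt below. The config namespace and every value here are placeholders, not taken from this repository; the real key prefix depends on your Pulumi project name:

```yaml
# Hypothetical excerpt of a Pulumi.<stack>.yaml stack file.
# "myproject" and all values are placeholders -- substitute your own.
config:
  myproject:githubOrg: my-github-user
  myproject:githubRepo: gitops-bridge-pulumi
  myproject:veleroBucketPrefix: my-unique-velero-bucket
  myproject:hubStackName: my-pulumi-account/myproject
```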
```shell
cd pulumi/
npm install
```
- Add an environment variable for `PULUMI_ACCESS_TOKEN`, or use `pulumi login`
- Add an environment variable for `GITHUB_TOKEN` in your deployment environment (local, GitHub Actions, AWS CodePipeline, etc.)
- Review `Pulumi.hub.yaml` and update configuration values as needed: set the GitHub repo/org, AWS account ID, CIDRs, etc. in the stack files, and add any extra resources you may need in your environment
- Run `pulumi up` for the hub cluster's stack:

```shell
pulumi up --stack hub
```
- Wait for the resources to be created (VPC, EKS cluster, and IAM permissions)
- Set the environment variable `ARGO_IAM_ROLE_ARN` before running the next step:

```shell
export ARGO_IAM_ROLE_ARN=$(pulumi stack output -s hub -j | jq .outputs.argoRoleArn -r)
```
- Run `./bootstrap.sh` to install ArgoCD on the hub cluster
- Run `git pull` to fetch the file `gitops/clusters/hub-cluster.yaml`
- Set up the kubectl CLI:

```shell
aws eks --region us-east-1 update-kubeconfig --name hub-cluster --alias hub-cluster
```
- Run:

```shell
kubectl create -f ../gitops/clusters/hub-cluster.yaml
```
- Access the ArgoCD UI:

```shell
echo "Username: admin"
echo "Password: $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" --context hub-cluster | base64 -d)"
echo "Access https://localhost:8080"
kubectl -n argocd port-forward svc/argocd-server 8080:443 --context hub-cluster
```
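If you prefer the ArgoCD CLI, you can log in through the same port-forward. This is a sketch assuming the `argocd` CLI is installed; `--insecure` is needed because the port-forward serves ArgoCD's self-signed certificate:

```shell
# Log in to ArgoCD via the local port-forward (start the port-forward first)
argocd login localhost:8080 \
  --username admin \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' --context hub-cluster | base64 -d)" \
  --insecure
```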
- Review `Pulumi.dev.yaml` and add any extra resources you may need in your environment
- Run `pulumi up` for the spoke cluster's stack:

```shell
pulumi up --stack dev
```
- Wait for the resources to be created (VPC, EKS cluster, and IAM permissions)
- Apply the Secret resource that was added to the GitOps repository
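For example, applying the spoke's cluster Secret to the hub might look like the following. The `dev-cluster.yaml` path is an assumption, inferred from the hub secret's naming pattern, so check what the Pulumi run actually committed:

```shell
# Fetch the committed secret, then register the dev spoke with ArgoCD on the hub
# (path is hypothetical; adjust to the file the Pulumi run added)
git pull
kubectl create -f ../gitops/clusters/dev-cluster.yaml --context hub-cluster
```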
- Set up the kubectl CLI:

```shell
aws eks --region us-east-1 update-kubeconfig --name dev-cluster --alias dev-cluster
```
- Repeat the same steps for the next cluster, e.g. `prod`:

```shell
pulumi up --stack prod
```
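To confirm that ArgoCD on the hub has registered the spoke clusters, you can list the cluster-type Secrets; `argocd.argoproj.io/secret-type=cluster` is the label ArgoCD uses to identify cluster secrets:

```shell
# Each registered cluster (hub and spokes) appears as one labeled Secret
kubectl get secrets -n argocd \
  -l argocd.argoproj.io/secret-type=cluster \
  --context hub-cluster
```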
To tear everything down, destroy the spoke stacks before the hub:

```shell
pulumi destroy --stack dev
pulumi destroy --stack prod
pulumi destroy --stack hub
```
- Add authentication so ArgoCD can pull from your organization's private repository
- Add ApplicationSets to your configuration by looking at the GitOps Bridge Control Plane Template for the resources you need
- Create an ArgoCD Application that manages deployment of your cluster Secret
- Switch your EKS cluster API endpoint to private access
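As a starting point for the ApplicationSet step, a cluster-generator ApplicationSet can fan an addon out to every cluster ArgoCD has registered. This is an illustrative sketch: the repo URL, chart path, and addon name are placeholders, not taken from this repository:

```yaml
# Illustrative ApplicationSet: one Application per registered cluster secret.
# repoURL, path, and names are placeholders -- substitute your own.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: example-addon
  namespace: argocd
spec:
  generators:
    - clusters: {}                 # matches every cluster secret ArgoCD knows
  template:
    metadata:
      name: "example-addon-{{name}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/my-org/my-gitops-repo
        targetRevision: HEAD
        path: gitops/charts/platform/example-addon
      destination:
        server: "{{server}}"       # filled in from each cluster secret
        namespace: example-addon
      syncPolicy:
        automated: {}
```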
```
root/
  .github/            # Contains GitHub Actions to deploy and preview Pulumi IaC
  gitops/             # Contains the GitOps configuration
    addons/           # Contains the ApplicationSet files and addons we want
      platform/       # Contains the platform-level addons we want
      team/           # Contains the application-team addons we want
    bootstrap/        # Contains the bootstrap application to deploy cluster secrets and ApplicationSets
    charts/           # Contains Helm charts and default values for configuration
      platform/       # Contains platform Helm charts and default values
      team/           # Contains application-team Helm charts and default values
    clusters/         # Contains the cluster secret files
    overrides/        # Contains values-file overrides
      clusters/       # Contains values-file overrides for specific clusters
      environments/   # Contains values-file overrides for specific cluster environments
  pulumi/             # Contains the Pulumi code for the repository
  bootstrap.sh        # The bootstrap script to run to set up the hub cluster
  Pulumi.hub.yaml     # Contains configuration for the Pulumi stack "hub"
  Pulumi.dev.yaml     # Contains configuration for the Pulumi stack "dev"
```