⏱ Estimated time: 45 minutes
- Setup
git clone https://github.com/apollosolutions/reference-architecture.git
cd reference-architecture
git pull
- gcloud CLI
- Optional: Helm
- GitHub
- Apollo GraphOS
- If using a cloud provider:
  - Google Cloud: must have a project with billing enabled
  - AWS: must have billing enabled
- Google Cloud
  - Google Cloud project ID
  - GitHub personal access token
    - Settings > Developer Settings > Personal Access Tokens
    - Grant it permissions to the following scopes:
      - repo (for creating repos)
      - delete_repo (for cleanup at the end)
  - Apollo GraphOS Personal API key
- AWS
  - AWS Access Key and Secret for use with the AWS CLI*
    - Additionally, ensure you either:
      - Set the default region during the AWS CLI configuration, or
      - Set the AWS_REGION environment variable when running commands
  - GitHub personal access token
    - Settings > Developer Settings > Personal Access Tokens
    - Grant it permissions to the following scopes:
      - repo (for creating repos)
      - delete_repo (for cleanup at the end)
  - Apollo GraphOS Personal API key
* Please note you will need an account with Administrator privileges or, at minimum, the ability to run Terraform, which creates:
- IAM user and policy
- EKS cluster and node groups, and associates IAM permissions to Kubernetes service accounts
- VPC and subnets
First, change directories into the folder for the cloud provider you wish to use. All Terraform is within the terraform root-level folder, with each provider having a subfolder within. The examples below assume GCP; the other providers use the same commands.
Next, make a copy of .env.sample called .env to keep track of these values. You can run source .env to reload all environment variables in a new terminal session.
# in either terraform/aws or terraform/gcp
cp .env.sample .env
Edit the new .env file:
export PROJECT_ID="<your google cloud project id>" # if using AWS, you will not see this line and can omit this
export APOLLO_KEY="<your apollo personal api key>"
export GITHUB_ORG="<your github account name or organization name>"
export TF_VAR_github_token="<your github personal access token>"
Run this script to create your graph and get environment variables for GraphOS:
# in the respective terraform/ folder
source .env
./create_graph.sh
The script adds a few more environment variables to .env, so reload your environment using:
source .env
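To double-check what the script added (assuming it appends standard export lines, like the samples above), you can inspect the file directly:
# optional: confirm the new GraphOS variables are present
grep '^export' .env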
If using GCP, update the gcloud CLI, authenticate, and enable the required APIs:
gcloud components update
gcloud components install gke-gcloud-auth-plugin
gcloud auth login
gcloud config set project ${PROJECT_ID}
gcloud services enable \
container.googleapis.com \
secretmanager.googleapis.com \
cloudasset.googleapis.com \
storage.googleapis.com
gh auth login
If using AWS, configure the AWS CLI and authenticate with GitHub:
aws configure
gh auth login
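Before continuing, it's worth a quick sanity check that each CLI is authenticated. These checks are optional and provider-specific:
# GCP: the active account should be listed
gcloud auth list
# AWS: should print your account ID and user ARN
aws sts get-caller-identity
# Both: should report that you're logged in
gh auth status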
Optional: how do I specify a different name for clusters and repos? (The default is "apollo-supergraph-k8s".)
Before running create_graph.sh, setup_clusters.sh, or terraform apply, export the prefix as environment variables:
export CLUSTER_PREFIX=my-custom-prefix
export TF_VAR_demo_name=$CLUSTER_PREFIX
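If you want the custom prefix to persist across terminal sessions, one option (not required by the scripts) is to append the same exports to your .env so that source .env restores them:
# optional: persist the custom prefix in .env
echo 'export CLUSTER_PREFIX=my-custom-prefix' >> .env
echo 'export TF_VAR_demo_name=$CLUSTER_PREFIX' >> .env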
Have you run this tutorial before?
You may need to clean up your GitHub packages before creating new repos of the same name. Visit https://github.com/<your github username>?tab=packages and delete the packages created by the previous versions of the repos.
Note: If using a cloud provider, the following commands will create resources on your cloud provider account and begin to accrue cost. The reference infrastructure defaults to a lower-cost environment (small node count and instance size); however, it will not be covered by either GCP's or AWS's free tier.
# for example, if using GCP
cd terraform/gcp
terraform init # takes about 2 minutes
terraform apply # will print plan then prompt for confirmation
# takes about 10-15 minutes
Note: If using GCP, you might get an Invalid provider configuration (no credentials loaded) error when running terraform apply. If so, run gcloud auth application-default login and try again.
Expected output:
kubernetes_cluster_names = {
"dev" = "apollo-supergraph-k8s-dev"
"prod" = "apollo-supergraph-k8s-prod"
}
repo = "https://github.com/you/reference-architecture"
What does this do?
Terraform provisions:
- Two Kubernetes clusters (dev and prod)
- The GitHub repository (<your org>/reference-architecture)
- GitHub Actions secrets for GCP/AWS and Apollo credentials
The subgraph repos are configured to build and deploy to the dev cluster once they're provisioned. (The deploy will fail the first time; see "Note about "initial commit" errors" below.)
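If you need these values again later (for example, in a new terminal session), terraform output re-prints them without re-applying:
# re-print outputs from the last apply
terraform output
terraform output -raw repo   # just the repository URL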
After creating the necessary clusters, you will need to run the included cluster setup script:
# for example, if using GCP
cd terraform/gcp
./setup_clusters.sh # takes about 2 minutes
What does this do?
For both dev and prod clusters:
- Configures your local kubectl environment so you can inspect your clusters
- For GCP users: configures namespace, service account, and role bindings for Open Telemetry and Google Traces
- For AWS users: configures load balancer controller policy and IAM service account
After this completes, you're ready to deploy your subgraphs!
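Before deploying, you can confirm the script registered both clusters with your local kubectl (the context names below assume the default prefix):
# list the contexts added by setup_clusters.sh
kubectx
kubectl get nodes --context apollo-supergraph-k8s-dev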
gh workflow run "Merge to Main" --repo $GITHUB_ORG/reference-architecture
# this deploys a dependency for prod, see note below
gh workflow run "Deploy Open Telemetry Collector" --repo $GITHUB_ORG/reference-architecture
Note about "initial commit" errors
When Terraform creates the repositories, they immediately kick off initial workflow runs. Because the secrets they need are not available at that point, those "initial commit" runs fail. Re-running the workflows with the commands above ensures the environments are correctly deployed.
You can try out a subgraph using port forwarding:
kubectx apollo-supergraph-k8s-dev
kubectl port-forward service/graphql -n checkout 4001:4001
Then visit http://localhost:4001/.
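If you prefer checking from the terminal, a minimal GraphQL request against the forwarded port works too ({ __typename } is just the smallest possible probe; any valid query against the subgraph's schema would do):
# smoke-test the subgraph through the port-forward
curl http://localhost:4001/ \
  -H 'content-type: application/json' \
  --data '{"query":"{ __typename }"}'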
Commits to the main branch of the subgraph repos are automatically built and deployed to the dev cluster. To deploy to prod, run the deploy actions:
gh workflow run "Manual Deploy - Subgraphs" --repo $GITHUB_ORG/reference-architecture \
-f version=main \
-f environment=prod \
-f dry-run=false \
-f debug=false
kubectx apollo-supergraph-k8s-prod
kubectl port-forward service/graphql -n checkout 4001:4001
Then visit http://localhost:4001/. You've successfully deployed your subgraphs! The next step is to deploy the Apollo Router and Coprocessor.
To do so, we'll need to run:
gh workflow run "Deploy Coprocessor" --repo $GITHUB_ORG/reference-architecture \
-f environment=dev \
-f dry-run=false \
-f debug=false
gh workflow run "Deploy Coprocessor" --repo $GITHUB_ORG/reference-architecture \
-f environment=prod \
-f dry-run=false \
-f debug=false
Once the coprocessor deploys complete, we'll deploy the router:
gh workflow run "Deploy Router" --repo $GITHUB_ORG/reference-architecture \
-f environment=dev \
-f dry-run=false \
-f debug=false
gh workflow run "Deploy Router" --repo $GITHUB_ORG/reference-architecture \
-f environment=prod \
-f dry-run=false \
-f debug=false
This deploys the router and coprocessor into both environments (dev and prod), along with an ingress to access the router in each. On AWS the ingress address is a domain name; on GCP it's an IP.
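To verify the rollout before looking up the ingress, you can check the router namespace directly (the namespace matches the ingress commands below):
# confirm the router pods and ingress exist in prod
kubectx apollo-supergraph-k8s-prod
kubectl get pods -n router
kubectl get ingress -n router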
Follow the instructions below for the cloud provider you are using. Note that for both providers, the ingress may take a few minutes to become live.
If using GCP:
kubectx apollo-supergraph-k8s-prod
ROUTER_HOSTNAME=http://$(kubectl get ingress -n router -o jsonpath="{.*.*.status.loadBalancer.ingress.*.ip}")
open $ROUTER_HOSTNAME
If using AWS:
kubectx apollo-supergraph-k8s-prod
ROUTER_HOSTNAME=$(kubectl get ingress -n router -o jsonpath="{.*.*.status.loadBalancer.ingress.*.hostname}")
open http://$ROUTER_HOSTNAME
Upon running the above commands, you'll have the Router page open and you can make requests against your newly deployed supergraph!
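You can also smoke-test the router from the terminal. Note that the GCP variant above already bakes http:// into ROUTER_HOSTNAME, while the AWS variant does not; the client headers match the Explorer note below:
# send a minimal query through the router ingress (prepend http:// on AWS)
curl "$ROUTER_HOSTNAME" \
  -H 'content-type: application/json' \
  -H 'apollographql-client-name: apollo-client' \
  -H 'apollographql-client-version: b' \
  --data '{"query":"{ __typename }"}'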
Note: If using Explorer to run operations, you will need to set the client headers first:
apollographql-client-name:apollo-client
apollographql-client-version:b
The last step to getting fully configured is to deploy the client to both environments. To do so, we'll need our router ingress URL to point the client to. This can be pulled from the prior commands, so if you are using the same terminal session, feel free to skip the next set of commands.
If using GCP:
kubectx apollo-supergraph-k8s-prod
ROUTER_HOSTNAME=http://$(kubectl get ingress -n router -o jsonpath="{.*.*.status.loadBalancer.ingress.*.ip}")
If using AWS:
kubectx apollo-supergraph-k8s-prod
ROUTER_HOSTNAME=$(kubectl get ingress -n router -o jsonpath="{.*.*.status.loadBalancer.ingress.*.hostname}")
Once you have the router hostname, you'll need to set it as a repository variable in the GitHub repository created:
gh variable set BACKEND_URL --body "$ROUTER_HOSTNAME" --repo $GITHUB_ORG/reference-architecture
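You can confirm the variable landed with:
# verify the repository variable
gh variable list --repo $GITHUB_ORG/reference-architecture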
Lastly, we'll need to deploy the client:
gh workflow run "Deploy Client" --repo $GITHUB_ORG/reference-architecture \
-f environment=prod \
-f dry-run=false \
-f debug=false
This creates another ingress specific to the client, so, much like with the router, run the commands below for your provider. As with the other ingress, it may take a few minutes to become active.
If using GCP:
kubectx apollo-supergraph-k8s-prod
ROUTER_IP=$(kubectl get ingress -n client -o jsonpath="{.*.*.status.loadBalancer.ingress.*.ip}")
open http://$ROUTER_IP
If using AWS:
kubectx apollo-supergraph-k8s-prod
ROUTER_HOSTNAME=$(kubectl get ingress -n client -o jsonpath="{.*.*.status.loadBalancer.ingress.*.hostname}")
open http://$ROUTER_HOSTNAME
You should now have the full architecture deployed!
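When you're done with the tutorial, remember the clusters keep accruing cost until destroyed. A teardown sketch, assuming you're back in the same terraform/ provider folder (terraform destroy removes everything terraform apply created, including the GitHub repository, which is why the token needs the delete_repo scope):
# in terraform/gcp or terraform/aws
source .env
terraform destroy   # prints the plan, then prompts for confirmation
You may also want to delete the GitHub packages, as described in the "Have you run this tutorial before?" note above.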