This repository guides you through preparing an Amazon EKS cluster, using AWS CloudShell, for Tigera's Calico Cloud workshop. The goal is to reduce the time spent setting up infrastructure during the workshop, so you can focus on learning Calico Cloud and everyone has the same experience.
The following are the basic requirements to start the workshop.
- AWS Account (AWS Console)
- AWS CloudShell (https://portal.aws.amazon.com/cloudshell)
- Amazon EKS Cluster - to be created here!
- Log in to the AWS Portal at https://portal.aws.amazon.com.
- Open the AWS CloudShell.
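As an optional sanity check (not part of the original workshop steps), you can confirm that the CloudShell session is using the AWS account and identity you expect:

```bash
# Prints the account ID, user ID and ARN of the identity CloudShell is using.
aws sts get-caller-identity
```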
- Install bash-completion on the AWS CloudShell.

```bash
sudo yum -y install bash-completion
```
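If you want to make sure the completion framework is available in your current shell, the following optional check should work (it assumes the standard file location used by the bash-completion package on Amazon Linux):

```bash
# Load the completion framework into the running shell and confirm its entry point exists.
source /etc/profile.d/bash_completion.sh
type _init_completion
```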
- Configure the kubectl autocomplete.

```bash
source <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
```

You can also use a shorthand alias for kubectl that also works with completion:

```bash
alias k=kubectl
complete -o default -F __start_kubectl k
echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
/bin/bash
```
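Once the new shell starts, you can confirm the alias and its completion registration took effect (optional check, not part of the original steps):

```bash
# "type k" should report the alias to kubectl; "complete -p k" shows the completion spec registered for it.
type k
complete -p k
```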
- Install eksctl (see the eksctl installation instructions for other install options).

```bash
mkdir -p ~/.local/bin
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl ~/.local/bin && eksctl version
```
- Install K9s, if you like it.

```bash
curl --silent --location "https://github.com/derailed/k9s/releases/download/v0.32.5/k9s_Linux_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/k9s ~/.local/bin && k9s version
```
- Define the environment variables to be used by the resource definitions.

NOTE: In this section, we'll create some environment variables. If your terminal session restarts, you may need to reset these variables. You can do that using the following command:

```bash
source ~/workshopvars.env
```

```bash
# Feel free to use the cluster name and the region that best suit you.
export CLUSTERNAME=tigera-workshop
export REGION=us-west-2
# Persist for later sessions in case of disconnection.
echo "# Start Lab Params" > ~/workshopvars.env
echo export CLUSTERNAME=$CLUSTERNAME >> ~/workshopvars.env
echo export REGION=$REGION >> ~/workshopvars.env
```
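If you want to confirm the variables were persisted for later sessions, this optional check shows the saved values (they will reflect whatever you exported above):

```bash
# Show the persisted workshop variables.
cat ~/workshopvars.env
```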
- Create the EKS cluster.

```bash
eksctl create cluster \
  --name $CLUSTERNAME \
  --version 1.29 \
  --region $REGION \
  --node-type m5.xlarge
```
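Cluster creation typically takes 15-20 minutes. If you want to check on it from eksctl while you wait (an optional command, not part of the original steps):

```bash
# Lists the clusters eksctl can see in the region, including the one being created.
eksctl get cluster --region $REGION
```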
- Verify your cluster status. The `status` should be `ACTIVE`.

```bash
aws eks describe-cluster \
  --name $CLUSTERNAME \
  --region $REGION \
  --no-cli-pager \
  --output yaml
```
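If you prefer to print just the status field, a variation using the AWS CLI `--query` option should work:

```bash
# Outputs only the cluster status, e.g. ACTIVE
aws eks describe-cluster \
  --name $CLUSTERNAME \
  --region $REGION \
  --query 'cluster.status' \
  --output text
```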
- Verify you have API access to your new EKS cluster.

```bash
kubectl get nodes
```
The output will be something similar to:
```
NAME                                           STATUS   ROLES    AGE    VERSION
ip-192-168-30-52.us-west-2.compute.internal    Ready    <none>   7m6s   v1.29.6-eks-1552ad0
ip-192-168-38-242.us-west-2.compute.internal   Ready    <none>   7m7s   v1.29.6-eks-1552ad0
```
To see more details about your cluster:
```bash
kubectl cluster-info
```
The output will be something similar to:
```
Kubernetes control plane is running at https://E306AAC3433C85AC39A376C39354E640.gr7.us-west-2.eks.amazonaws.com
CoreDNS is running at https://E306AAC3433C85AC39A376C39354E640.gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

You should now have a Kubernetes cluster running with 2 nodes. You do not see the master servers for the cluster because these are managed by AWS. The Control Plane services which manage the Kubernetes cluster, such as scheduling, API access, the configuration data store, and object controllers, are all provided as services to the nodes.
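As an optional way to see this in practice (not part of the original steps), list the pods in the kube-system namespace: you will only see node-level add-ons such as CoreDNS, kube-proxy, and the aws-node CNI, because the API server, scheduler, and controllers run on the AWS-managed control plane rather than on your worker nodes:

```bash
# Shows the add-on pods scheduled on your worker nodes and which node each runs on.
kubectl get pods -n kube-system -o wide
```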
- Save the nodegroup name in an environment variable:

```bash
export NGNAME=$(eksctl get nodegroups --cluster $CLUSTERNAME --region $REGION | grep $CLUSTERNAME | awk -F ' ' '{print $2}') && \
echo export NGNAME=$NGNAME >> ~/workshopvars.env
```
- Scale the nodegroup down to 0 nodes to reduce cost.

```bash
eksctl scale nodegroup $NGNAME \
  --cluster $CLUSTERNAME \
  --region $REGION \
  --nodes 0 \
  --nodes-max 1 \
  --nodes-min 0
```
- It will take a minute or two until all nodes are deleted. You can monitor the process using the following command:

```bash
watch kubectl get nodes
```
When there are no more worker nodes in your EKS cluster, you should see:
```
Every 2.0s: kubectl get nodes

No resources found
```
- Connect back to your AWS CloudShell and load the environment variables:

```bash
source ~/workshopvars.env
```
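A quick way to confirm the variables are back in your environment (optional check):

```bash
# All three values should be printed; if any is blank, review the earlier steps.
echo $CLUSTERNAME $REGION $NGNAME
```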
- Scale the nodegroup back up to 2 nodes:

```bash
eksctl scale nodegroup $NGNAME \
  --cluster $CLUSTERNAME \
  --region $REGION \
  --nodes 2 \
  --nodes-max 2 \
  --nodes-min 2
```
- It will take a few minutes until the nodes are back in `Ready` status. You can monitor it with the following command:

```bash
watch kubectl get nodes
```
Wait until the output becomes:
```
Every 2.0s: kubectl get nodes

NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-30-52.us-west-2.compute.internal    Ready    <none>   8m59s   v1.29.6-eks-1552ad0
ip-192-168-38-242.us-west-2.compute.internal   Ready    <none>   9m      v1.29.6-eks-1552ad0
```