Calico Cloud on EKS - Workshop Environment Preparation

This repository guides you through preparing an EKS cluster, using AWS CloudShell, for Tigera's Calico Cloud workshop. The goal is to minimize the time spent on infrastructure setup during the workshop itself, leaving more time for learning Calico Cloud and ensuring everyone starts from the same environment.

Getting Started with AWS CloudShell

The following are the basic requirements to start the workshop.

Instructions

  1. Log in to the AWS Portal at https://portal.aws.amazon.com.

  2. Open the AWS CloudShell.

    (Screenshot: opening AWS CloudShell from the console toolbar.)

  3. Install bash-completion on AWS CloudShell.

    sudo yum -y install bash-completion
  4. Configure kubectl autocompletion.

    source <(kubectl completion bash) # enable autocompletion in the current shell; requires the bash-completion package installed above
    echo "source <(kubectl completion bash)" >> ~/.bashrc # enable autocompletion permanently for future shells

    You can also set a shorthand alias for kubectl that works with completion:

    alias k=kubectl
    complete -o default -F __start_kubectl k
    echo "alias k=kubectl"  >> ~/.bashrc
    echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
    /bin/bash # start a new shell so the alias and completion take effect
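
    As a quick sanity check, you can confirm the completion function is defined in the new shell (using the __start_kubectl function name registered above):

    type __start_kubectl | head -n 1 # should print: __start_kubectl is a function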
  5. Install eksctl - see the official installation instructions for other methods.

    mkdir -p ~/.local/bin
    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
    mv /tmp/eksctl ~/.local/bin && eksctl version
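
    AWS CloudShell normally includes ~/.local/bin in your PATH. If the eksctl version check fails with "command not found", you can add it to the current session as a fallback:

    export PATH="$HOME/.local/bin:$PATH"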
  6. Optionally, install k9s, a terminal UI for Kubernetes.

    curl --silent --location "https://github.com/derailed/k9s/releases/download/v0.32.5/k9s_Linux_amd64.tar.gz" | tar xz -C /tmp
    mv /tmp/k9s ~/.local/bin && k9s version

Create an Amazon EKS Cluster

  1. Define the environment variables to be used by the resource definitions.

    NOTE: In this section we create some environment variables and persist them to a file. If your terminal session restarts, you can reload them with:

    source ~/workshopvars.env

    # Feel free to use the cluster name and the region that best suit you.
    export CLUSTERNAME=tigera-workshop
    export REGION=us-west-2
    # Persist the variables for later sessions in case of disconnection.
    echo "# Start Lab Params" > ~/workshopvars.env
    echo export CLUSTERNAME=$CLUSTERNAME >> ~/workshopvars.env
    echo export REGION=$REGION >> ~/workshopvars.env
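
    To confirm the variables were written and load correctly, you can source the file in place (the echoed values are just for visual verification):

    cat ~/workshopvars.env
    source ~/workshopvars.env && echo "Cluster: $CLUSTERNAME, Region: $REGION"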
  2. Create the EKS cluster.

    eksctl create cluster \
      --name $CLUSTERNAME \
      --version 1.29 \
      --region $REGION \
      --node-type m5.xlarge
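
    Cluster creation typically takes 15-20 minutes. If you want to follow the progress of the underlying CloudFormation stacks from a second terminal, one option is the sketch below; it assumes eksctl's default stack naming, which embeds the cluster name:

    aws cloudformation describe-stacks \
      --region $REGION \
      --no-cli-pager \
      --query "Stacks[?contains(StackName, '$CLUSTERNAME')].{Name:StackName, Status:StackStatus}" \
      --output table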
  3. Verify your cluster status. The status should be ACTIVE.

    aws eks describe-cluster \
      --name $CLUSTERNAME \
      --region $REGION \
      --no-cli-pager \
      --output yaml
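
    If you only want the status field, the AWS CLI --query option can extract it:

    aws eks describe-cluster \
      --name $CLUSTERNAME \
      --region $REGION \
      --query 'cluster.status' \
      --output text # prints ACTIVE once the cluster is ready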
  4. Verify you have API access to your new EKS cluster.

    kubectl get nodes

    The output will look similar to:

    NAME                                           STATUS   ROLES    AGE    VERSION
    ip-192-168-30-52.us-west-2.compute.internal    Ready    <none>   7m6s   v1.29.6-eks-1552ad0
    ip-192-168-38-242.us-west-2.compute.internal   Ready    <none>   7m7s   v1.29.6-eks-1552ad0
    

    To see more details about your cluster:

     kubectl cluster-info

    The output will look similar to:

    Kubernetes control plane is running at https://E306AAC3433C85AC39A376C39354E640.gr7.us-west-2.eks.amazonaws.com
    CoreDNS is running at https://E306AAC3433C85AC39A376C39354E640.gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy 
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

    You should now have a Kubernetes cluster running with two nodes. You do not see the control plane nodes because they are managed by AWS. The control plane services that run the cluster, such as scheduling, API access, the configuration data store, and the object controllers, are all provided to the nodes as managed services.
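
    What you can see are the node-level components EKS installs for you. Listing the kube-system pods should show the aws-node (VPC CNI), kube-proxy, and CoreDNS pods running on your two worker nodes:

    kubectl get pods -n kube-system -o wide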

Scale down the nodegroup to 0 nodes until the workshop starts.

  1. Save the nodegroup name in an environment variable:

    export NGNAME=$(eksctl get nodegroups --cluster $CLUSTERNAME --region $REGION | grep $CLUSTERNAME | awk -F ' ' '{print $2}') && \
    echo export NGNAME=$NGNAME >> ~/workshopvars.env
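
    Since eksctl creates an EKS managed nodegroup by default, you can cross-check the captured name against the AWS CLI listing:

    aws eks list-nodegroups \
      --cluster-name $CLUSTERNAME \
      --region $REGION \
      --no-cli-pager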
  2. Scale the nodegroup down to 0 nodes to reduce cost.

    eksctl scale nodegroup $NGNAME \
      --cluster $CLUSTERNAME \
      --region $REGION \
      --nodes 0 \
      --nodes-max 1 \
      --nodes-min 0
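
    To verify the new sizing took effect, list the nodegroup again and check the MIN SIZE, MAX SIZE, and DESIRED CAPACITY columns:

    eksctl get nodegroup --cluster $CLUSTERNAME --region $REGION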
  3. It will take a minute or two until all nodes are deleted. You can monitor the process using the following command:

    watch kubectl get nodes

    When there are no more worker nodes in your EKS cluster, you should see:

    Every 2.0s: kubectl get nodes
    
    No resources found
    

Scale up the nodegroup to 2 nodes before the workshop starts.

  1. Connect back to your AWS CloudShell and load the environment variables:

    source ~/workshopvars.env
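
    If kubectl cannot reach the cluster afterwards (for example, because the kubeconfig entry was lost), you can regenerate it with the AWS CLI:

    aws eks update-kubeconfig \
      --name $CLUSTERNAME \
      --region $REGION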
  2. Scale the nodegroup back up to 2 nodes:

    eksctl scale nodegroup $NGNAME \
      --cluster $CLUSTERNAME \
      --region $REGION \
      --nodes 2 \
      --nodes-max 2 \
      --nodes-min 2
  3. It will take a few minutes until the nodes are back in Ready status. You can monitor it with the following command:

    watch kubectl get nodes

    Wait until the output becomes:

    Every 2.0s: kubectl get nodes  
    
    NAME                                           STATUS   ROLES    AGE     VERSION
    ip-192-168-30-52.us-west-2.compute.internal    Ready    <none>   8m59s   v1.29.6-eks-1552ad0
    ip-192-168-38-242.us-west-2.compute.internal   Ready    <none>   9m      v1.29.6-eks-1552ad0
    

You are now ready to start the workshop!
