Added steps for cluster move operation and added a reference to the new capi in-cluster-ippool solution (#98)

tillt authored Nov 10, 2023
2 parents 1d10d9d + 2f1335b commit 6d4b3b3
excerpt: Learn how to predefine a range of IP Addresses for nodes provisioned vi
feature_image: mail.png
---

> Note:
> - DKP 2.6.0+ bumps CAPV to 1.6.0+, which supports [CAPI In-Cluster IPAM](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster). An example is documented [here](https://github.com/arbhoj/CAPI-In-Cluster-IPAM). New implementations should use that solution. Existing implementations that use the solution in this blog post should migrate to it by starting with an `InClusterIPPool` containing only unallocated addresses and slowly adding addresses back as they are freed up during upgrades (see the sketch below).
> - If using this solution in a scenario where the CAPI resources were moved (e.g. a Self Managed Cluster), the IPPool and the other custom resources that keep track of IP allocations do **not** automatically move to the cluster that the CAPI resources are moved to. This can cause undesirable results if someone performs a cluster upgrade and the nodes being upgraded use pre-defined IPs. For this scenario, follow the steps in the [Steps for Moving to Another Cluster](#steps-for-moving-to-another-cluster) section of this blog.

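
For reference, here is a minimal sketch of how such a pool might be declared with the newer provider (the pool name and addresses are illustrative placeholders, and the exact API version and field names depend on your provider release, so verify against the cluster-api-ipam-provider-in-cluster documentation; wiring the pool up to the machine templates is covered in the linked example):

```
kubectl apply -f - <<EOF
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: example-node-pool           # illustrative name
spec:
  addresses:
    - 10.16.56.91-10.16.56.93       # only addresses not currently assigned to any VM
  prefix: 20
  gateway: 10.16.48.1
EOF
```
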
# Why is a predefined IP Address range required for nodes when building a Kubernetes cluster?

When deploying Kubernetes on [vSphere](https://en.wikipedia.org/wiki/VMware_vSphere) or another on-prem environment, it is often desirable to use an external load balancer (like [F5](https://www.f5.com/)) to provide a virtual IP (VIP) for kube-apiserver high availability, especially in mission-critical environments where an internal load balancer (for example, Kubernetes-hosted solutions like [keepalived](https://keepalived.readthedocs.io/en/latest/introduction.html) or [kube-vip](https://github.com/kube-vip/kube-vip)) may not provide the required fault tolerance or throughput. Also, these solutions use the ARP protocol and thus require the VIP to be in the same subnet as the nodes, which may not always be feasible.
```
export KUBECONFIG=$(pwd)/${CLUSTER_NAME}.conf # Point kubectl to the downloaded kubeconfig
kubectl get nodes -o wide # Print the list of nodes along with their IPs
```

## Steps for Moving to Another Cluster
ClusterAPI is designed to work in a "Management Cluster > Workload Cluster" topology, which maps naturally to DKP Enterprise, and the solution discussed in this blog works well in that classic scenario.

However, if using DKP Essentials with a Self Managed Cluster (or in any other scenario where the CAPI resources were moved after the cluster build), the following extra steps need to be performed after moving the CAPI resources. This is because the `IPPool.ipam.metal3.io` resource and the other custom resources that keep track of allocated/unallocated IP addresses are not moved automatically when the CAPI cluster resources are pivoted from the bootstrap cluster to the self-managed workload cluster using the `dkp move capi-resources` command:

> The **important** point to understand here is that the CAPI controllers only request IPs from the IPPool when provisioning a new VM (i.e. when a VSphereMachine resource is created), which happens when you create, upgrade, or scale nodes. So, after moving the CAPI resources, start with an IPPool that contains only unused IP addresses, and then slowly add the remaining addresses back to the pool as they are released during an upgrade or delete cycle.
> Note: The following example assumes that 6 IPs (10.16.56.88-10.16.56.93) were kept aside for the Control Plane (i.e. 3 for the initial deploy and 3 for upgrades, scaling, maintenance, etc.), so only the first 3 (10.16.56.88, 10.16.56.89, 10.16.56.90) were assigned to the Control Plane VMs. You can verify which IPs are currently in use as shown below.
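
Before rebuilding the pool, it helps to confirm what actually survived the move and which of the reserved IPs are in use. A quick sanity check with plain `kubectl` on the self-managed cluster (using the resource groups of the metal3 IPAM provider this blog relies on) might look like this:

```
# The CAPI objects are present after the move, but no IPAM allocation records came with them
kubectl get ippools.ipam.metal3.io,ipclaims.ipam.metal3.io,ipaddresses.ipam.metal3.io -A

# The node list still shows which of the reserved IPs are actually in use
# (10.16.56.88-10.16.56.90 in this example)
kubectl get nodes -o wide
```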

Step 1: After the cluster is pivoted to the self-managed workload cluster, deploy the IPAM manifests as defined in this [section](#steps-to-deploy-capv-with-ipam) and then create the IPPool resource with the IPs that were kept aside for the cluster but are not allocated to any VM. To keep things simple, create multiple ranges in the IPPool with a single IP in each range (this is not a hard requirement).

```
kubectl apply -f - <<EOF
apiVersion: ipam.metal3.io/v1alpha1
kind: IPPool
metadata:
  name: ${CLUSTER_NAME}-pool
  labels:
    cluster.x-k8s.io/network-name: ${NETWORK_NAME}
    cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
spec:
  clusterName: ${CLUSTER_NAME}
  namePrefix: ${CLUSTER_NAME}-prov
  pools:
    - start: 10.16.56.91
      end: 10.16.56.91
    - start: 10.16.56.92
      end: 10.16.56.92
    - start: 10.16.56.93
      end: 10.16.56.93
  prefix: 20
  gateway: 10.16.48.1
  dnsServers: [${DNS_SERVER}]
EOF
```
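
Nothing is consumed from this pool immediately; an address is only claimed when the next VSphereMachine is provisioned. A quick way to watch this (the allocation objects are typically named using the `namePrefix`, i.e. `${CLUSTER_NAME}-prov-...`) is:

```
# The pool itself, containing only the unallocated ranges defined above
kubectl get ippools.ipam.metal3.io ${CLUSTER_NAME}-pool -o yaml

# Claims and addresses will only show up here once a new VSphereMachine is created
kubectl get ipclaims.ipam.metal3.io,ipaddresses.ipam.metal3.io
```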

Step 2: Perform activities like upgrades, updates (e.g. node deletion), scaling, etc., and update the IPPool with any IP that was released, i.e. if an IP address is freed up as a result of the activity, add it as an individual range to the list of pools. These IPs will then be used for the next upgrade/update activity that requires additional static IPs for the control plane. For example:

```
kubectl apply -f - <<EOF
apiVersion: ipam.metal3.io/v1alpha1
kind: IPPool
metadata:
  name: ${CLUSTER_NAME}-pool
  labels:
    cluster.x-k8s.io/network-name: ${NETWORK_NAME}
    cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
spec:
  clusterName: ${CLUSTER_NAME}
  namePrefix: ${CLUSTER_NAME}-prov
  pools:
    - start: 10.16.56.88
      end: 10.16.56.88
    - start: 10.16.56.89
      end: 10.16.56.89
    - start: 10.16.56.91
      end: 10.16.56.91
    - start: 10.16.56.92
      end: 10.16.56.92
    - start: 10.16.56.93
      end: 10.16.56.93
  prefix: 20
  gateway: 10.16.48.1
  dnsServers: [${DNS_SERVER}]
EOF
```
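
To see which of the reserved addresses have been released by such an activity (and can therefore be added back), compare the reserved range against what is currently in use. A simple check, assuming the reserved range from the note above, looks like this:

```
# Addresses currently in use by the cluster nodes
kubectl get nodes -o wide

# Addresses currently tracked by the IPAM provider
kubectl get ipaddresses.ipam.metal3.io -A

# Any IP from the reserved range (10.16.56.88-10.16.56.93) that appears in neither
# list has been released and can be added back to the IPPool manifest above
```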

Following this process, over the course of the next few upgrade or update cycles, all of the IPs that were kept aside for the cluster will once again become part of this pool.

Well. That's it. Hope you found this useful in setting up a vSphere environment.
