Integrate Cilium with Hybridnet

Hybridnet does not implement Kubernetes Service or NetworkPolicy itself, so normally it cannot be deployed without kube-proxy. But kube-proxy is not the only choice for a Kubernetes service implementation.

Cilium is an eBPF-based networking and security system. It provides a more efficient service/policy implementation along with many advanced operating metrics. Here we describe an experimental way to deploy a Kubernetes cluster with hybridnet and Cilium (without kube-proxy).

The integration solution is based on Cilium's Generic Veth Chaining mode.

Environment information

Cilium has specific system requirements, described in its documentation. In this experiment, we use the following environment (a quick prerequisite check is sketched after the list):

  • Two ECS machines on aliyun.com, one master and one worker.
  • CentOS 8 with kernel version 4.18.0-348.2.1.el8_5.x86_64
  • Cilium version: 1.10.5
  • Kubernetes version: 1.20.13
  • Helm version: 3.7.1
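
Before installing anything, it is worth confirming that each node meets Cilium's system requirements. A quick check (a sketch; see Cilium's system-requirements documentation for the full list):

# kernel must be recent enough for Cilium's eBPF datapath
uname -r

# the BPF filesystem must be mounted (Cilium can normally mount it itself)
mount | grep /sys/fs/bpf || mount bpffs /sys/fs/bpf -t bpf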

Deploy a cluster without Kube-proxy

Initialize the control-plane node via kubeadm init and skip the installation of the kube-proxy add-on:

kubeadm init --kubernetes-version=v1.20.13 --skip-phases=addon/kube-proxy
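
The worker node then joins the cluster as usual; a minimal sketch, assuming the token and CA certificate hash printed by kubeadm init:

kubeadm join <api-server-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>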

Deploy Cilium for Kubernetes service

Create a chaining.yaml ConfigMap based on the following template:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-configuration
  namespace: kube-system
data:
  cni-config: |-
    {
      "name": "generic-veth",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type":"hybridnet",
          "server_socket":"/run/cni/hybridnet.sock"
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "cilium-cni"
        }
      ]
    }
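
Apply the manifest to the cluster (assuming it is saved as chaining.yaml):

kubectl apply -f chaining.yaml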

Install Cilium 1.10.5 via Helm:

helm repo add cilium https://helm.cilium.io/

helm install cilium cilium/cilium --version 1.10.5 \
    --namespace kube-system \
    --set cni.chainingMode=generic-veth \
    --set cni.customConf=true \
    --set cni.configMap=cni-configuration \
    --set tunnel=disabled \
    --set enableIPv4Masquerade=false \
    --set kubeProxyReplacement=strict \
    --set k8sServiceHost=<api-server-ip> \
    --set k8sServicePort=<api-server-port> \
    --set bpf.hostRouting=true 
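
Once the cilium pods are up, you can check that the kube-proxy replacement is active; a sketch, where <cilium-pod> is any cilium agent pod:

kubectl -n kube-system get pods -l k8s-app=cilium
kubectl -n kube-system exec <cilium-pod> -- cilium status | grep KubeProxyReplacement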

Deploy Hybridnet

Deploy hybridnet with an overlay Network according to Getting Started.
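
At the time of writing, hybridnet ships a Helm chart; a minimal sketch of the installation (see Getting Started for the authoritative steps and for creating the overlay Network):

helm repo add hybridnet https://alibaba.github.io/hybridnet/
helm install hybridnet hybridnet/hybridnet -n kube-system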

On every node of the cluster, remove the hybridnet CNI configuration file so that kubelet picks up the chained CNI configuration installed by Cilium instead:

rm /etc/cni/net.d/00-hybridnet.conflist
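
For convenience, this can be run from one machine; a sketch, assuming root SSH access to both nodes of this environment:

for node in 172.19.18.241 172.19.18.245; do
    ssh root@"$node" 'rm -f /etc/cni/net.d/00-hybridnet.conflist'
done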

After the configuration file has been removed on every node, delete the coredns-xxx pods so that they restart with the new CNI configuration.
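
A sketch of the restart, assuming the default k8s-app=kube-dns label on the CoreDNS pods:

kubectl -n kube-system delete pod -l k8s-app=kube-dns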

Connectivity test

Following Cilium's connectivity tests doc, we verify that Cilium and hybridnet are working correctly with the following commands:

kubectl create ns cilium-test
wget https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes/connectivity-check/connectivity-check.yaml
sed -i 's/google/alibaba/g' ./connectivity-check.yaml
kubectl apply --namespace=cilium-test -f ./connectivity-check.yaml

Because access to "google.com" is blocked on the ECS machines in the experimental environment, we change "google.com" to "alibaba.com" in the "pod-to-external-fqdn-allow" test case. Then we get the results below:

[root@iZf8zcj7vnz8u4jzgg9lgaZ ~]# kubectl get po -n cilium-test -owide
NAME                                                      READY   STATUS    RESTARTS   AGE     IP              NODE                      NOMINATED NODE   READINESS GATES
echo-a-56f9849b6b-bw25q                                   1/1     Running   0          6m16s   10.14.100.93    izf8zcj7vnz8u4jzgg9lgbz   <none>           <none>
echo-b-5898f8c6c4-fddq8                                   1/1     Running   0          6m16s   10.14.100.94    izf8zcj7vnz8u4jzgg9lgbz   <none>           <none>
echo-b-host-7b949c7b9f-ffbr7                              1/1     Running   0          6m16s   172.19.18.245   izf8zcj7vnz8u4jzgg9lgbz   <none>           <none>
host-to-b-multi-node-clusterip-6c4db976b5-jn5d6           1/1     Running   0          6m15s   172.19.18.241   izf8zcj7vnz8u4jzgg9lgaz   <none>           <none>
host-to-b-multi-node-headless-68d54bb4b7-c2wk5            1/1     Running   1          6m15s   172.19.18.241   izf8zcj7vnz8u4jzgg9lgaz   <none>           <none>
pod-to-a-9dc6d768c-knv4t                                  1/1     Running   0          6m16s   10.14.100.95    izf8zcj7vnz8u4jzgg9lgbz   <none>           <none>
pod-to-a-allowed-cnp-5ff69578c5-hxc8j                     1/1     Running   0          6m16s   10.14.100.98    izf8zcj7vnz8u4jzgg9lgbz   <none>           <none>
pod-to-a-denied-cnp-64d7765ddf-z785n                      1/1     Running   0          6m16s   10.14.100.97    izf8zcj7vnz8u4jzgg9lgbz   <none>           <none>
pod-to-b-intra-node-nodeport-5c8cd69ff5-q82nd             1/1     Running   1          6m15s   10.14.100.103   izf8zcj7vnz8u4jzgg9lgbz   <none>           <none>
pod-to-b-multi-node-clusterip-7b5854d46c-7699l            1/1     Running   0          6m16s   10.14.100.100   izf8zcj7vnz8u4jzgg9lgaz   <none>           <none>
pod-to-b-multi-node-headless-77b698d8f5-fnmhn             1/1     Running   1          6m16s   10.14.100.101   izf8zcj7vnz8u4jzgg9lgaz   <none>           <none>
pod-to-b-multi-node-nodeport-84fdc88d9f-hv4dx             1/1     Running   1          6m15s   10.14.100.102   izf8zcj7vnz8u4jzgg9lgaz   <none>           <none>
pod-to-external-1111-6b96d6cc7-9lpl6                      1/1     Running   0          6m16s   10.14.100.96    izf8zcj7vnz8u4jzgg9lgbz   <none>           <none>
pod-to-external-fqdn-allow-alibaba-cnp-67b44d4d8f-2z6dr   1/1     Running   0          6m16s   10.14.100.99    izf8zcj7vnz8u4jzgg9lgaz   <none>           <none>
[root@iZf8zcj7vnz8u4jzgg9lgaZ ~]# kubectl get node -owide
NAME                      STATUS   ROLES                  AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                 CONTAINER-RUNTIME
izf8zcj7vnz8u4jzgg9lgaz   Ready    control-plane,master   6d3h   v1.20.9   172.19.18.241   <none>        CentOS Linux 8   4.18.0-348.2.1.el8_5.x86_64    docker://20.10.11
izf8zcj7vnz8u4jzgg9lgbz   Ready    <none>                 6d3h   v1.20.9   172.19.18.245   <none>        CentOS Linux 8   4.18.0-240.22.1.el8_3.x86_64   docker://20.10.11
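
Rather than polling kubectl get po by hand, you can wait for every check to become Ready; a sketch:

kubectl -n cilium-test wait --for=condition=Ready pod --all --timeout=300s

All pods reaching Running and 1/1 Ready indicates that both the service path (handled by Cilium) and the pod network (handled by hybridnet) are healthy.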