HSM-SDS-Server

Introduction

The HSM SDS Server follows Envoy's SDS extension standard and implements an external SDS server backed by a more secure solution, a Hardware Security Module (HSM). With it, users can maintain the credentials of workloads managed by Istio/Envoy in a more secure fashion via an external SDS server. Besides managing newly created credentials, it also lets users upload existing credentials and manage them at a higher security level. This external SDS server can provide credential information for both gateways and workload sidecars.

This HSM SDS Server protects service mesh data plane private keys with Intel® SGX. The private keys are stored and used inside the SGX enclave(s) and are never stored in clear text anywhere in the system. Authorized applications use the private key in the enclave through a key handle provided by SGX.

Architecture Overview

The SDS Server can protect private keys via SGX in two scenarios, workloads and gateways in Istio/Envoy, as shown in the architecture diagram above.

Prerequisites

Prerequisites for using Istio mTLS private key protection with SGX:

Getting started

This section covers how to install Istio mTLS and gateway private key protection with SGX. We use cert-manager as the default Kubernetes CA in this document. If you want to use TCS for workload remote attestation, please refer to this Document.

Note: please ensure cert-manager is installed with the flag --feature-gates=ExperimentalCertificateSigningRequestControllers=true. You can pass --set featureGates="ExperimentalCertificateSigningRequestControllers=true" when installing cert-manager with Helm.
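For reference, a typical Helm installation with the required feature gate might look like the following (the repository setup and installCRDs option are standard cert-manager chart usage; adjust versions to your environment):

```shell
# Install cert-manager with the experimental CertificateSigningRequest
# controllers enabled, as required by the Istio integration below.
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true \
  --set featureGates="ExperimentalCertificateSigningRequestControllers=true"
```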

Create signer

$ cat <<EOF > ./istio-cm-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-istio-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: istio-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: istio-system
  secretName: istio-ca-selfsigned
  issuerRef:
    name: selfsigned-istio-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: istio-system
spec:
  ca:
    secretName: istio-ca-selfsigned
EOF
$ kubectl apply -f ./istio-cm-issuer.yaml
# Get CA Cert and replace it in ./deployment/istio-configs/istio-hsm-config.yaml
$ kubectl get clusterissuers istio-system -o jsonpath='{.spec.ca.secretName}' | xargs kubectl get secret -n cert-manager -o jsonpath='{.data.ca\.crt}' | base64 -d

Apply quote attestation CRD

$ git clone https://github.com/intel/trusted-certificate-issuer.git
$ kubectl apply -f trusted-certificate-issuer/deployment/crds/

Protect the private keys of workloads with HSM

  1. Install Istio
$ istioctl install -f ./deployment/istio-configs/istio-hsm-config.yaml -y
  2. Verify that Istio is ready

By default, Istio will be installed in the istio-system namespace

# Ensure that the pod is running state
$ kubectl get po -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-6cd77bf4bf-t4cwj   1/1     Running   0          70m
istiod-6cf88b78dc-dthpw                 1/1     Running   0          70m
  3. Create the sleep and httpbin deployments:

NOTE: If you want to use the sds-custom injection template, you need to set the inject.istio.io/templates annotation for both the sidecar and sgx templates. The ClusterRole is also required.
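For illustration, such an annotation on the pod template might look like this (the template names sidecar and sgx are assumed from the note above):

```yaml
metadata:
  annotations:
    inject.istio.io/templates: sidecar,sgx
```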

$ kubectl apply -f <(istioctl kube-inject -f ./deployment/istio-configs/sleep-hsm.yaml )
$ kubectl apply -f <(istioctl kube-inject -f ./deployment/istio-configs/httpbin-hsm.yaml )

A reminder: if you want to deploy other workloads, make sure to add the correct RBAC rules for their Service Accounts. For details, refer to the ClusterRole configuration in ./deployment/istio-configs/httpbin-hsm.yaml.

  4. Successful deployment looks like this:
$ kubectl get po
NAME                       READY   STATUS    RESTARTS   AGE
httpbin-5f6bf4d4d9-5jxj8   3/3     Running   0          30s
sleep-57bc8d74fc-2lw4n     3/3     Running   0          7s
  5. Test pod resources:
$ kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl -v -s http://httpbin.default:8000/headers | grep X-Forwarded-Client-Cert
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=2875ce095572f8a12b6080213f7789bfb699099b83e8ea2889a2d7b3eb9523e6;Subject=\"CN=SGX based workload,O=Intel(R) Corporation\";URI=spiffe://cluster.local/ns/default/sa/sleep"

The httpbin and sleep applications above have SGX enabled and store their private keys inside SGX enclaves; they completed the mTLS handshake and established a connection with each other.

# Open a shell in the istio-proxy container
$ kubectl exec -it "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c istio-proxy -- bash

# Inside the container, dump the Envoy config
$ curl localhost:15000/config_dump > envoy_conf.json

It can be seen from the config file that the private_key_provider configuration has replaced the original inline private key; the real private key is stored safely inside the SGX enclave.
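As a quick check on the dumped file, you can locate the provider entries with grep (assuming the envoy_conf.json produced above):

```shell
# Show each private_key_provider entry with a few lines of context;
# an inline private key in the TLS config would indicate the key is
# NOT being protected by the SGX provider.
grep -n -A 3 "private_key_provider" envoy_conf.json
```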

Protect the private keys of gateways with HSM

  1. Install Istio

NOTE: for the command below you need the istioctl from docker.io/intel/istioctl:1.19.0-intel.0, since only that image contains the Istio manifest enhancements for SGX mTLS. You can also customize the intel-istio-sgx-gateway.yaml.

istioctl install -f ./deployment/istio-configs/gateway-istio-hsm.yaml -y

Note: please execute kubectl apply -f deployment/istio-configs/gateway-clusterrole.yaml to make sure that the ingress gateway has enough privilege.

  2. Verify the pods are running

By default, Istio will be installed in the istio-system namespace

# Ensure that the pods are running state
$ kubectl get pod -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-55f8dbb66c-6qx2s   2/2     Running   0          73s
istiod-65db6d8666-jgmf7                 1/1     Running   0          75s
  3. Deploy the sample application

Create httpbin deployment with gateway CR:

NOTE: If you want to use the sds-custom injection template, you need to set the inject.istio.io/templates annotation for both the sidecar and sgx templates. The ClusterRole is also required.

kubectl apply -f <(istioctl kube-inject -f ./deployment/istio-configs/httpbin-hsm.yaml )
kubectl apply -f ./deployment/istio-configs/httpbin-gateway.yaml

A reminder: if you want to deploy other workloads, make sure to add the correct RBAC rules for their Service Accounts. For details, refer to the ClusterRole configuration in ./deployment/istio-configs/httpbin-hsm.yaml.

  4. Successful deployment looks like this:

Verify the httpbin pod:

$ kubectl get pod -n default
NAME                       READY   STATUS    RESTARTS      AGE
httpbin-7fbf9db8f6-qvqn4   3/3     Running   0             2m27s

Verify the gateway CR:

$ kubectl get gateway -n default
NAME              AGE
testuds-gateway   2m52s

Verify the quoteattestation CR:

$ kubectl get quoteattestations.tcs.intel.com -n default
NAME                                                                            AGE
sgxquoteattestation-istio-ingressgateway-55f8dbb66c-6qx2s-httpbin-testsds-com   4m36s

Manually get the quoteattestation name via the command below:

$ export QA_NAME=<YOUR QUOTEATTESTATION NAME>
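Alternatively, if there is exactly one quoteattestation CR in the namespace, its name can be captured programmatically:

```shell
# Assumes exactly one quoteattestation CR exists in the default namespace
export QA_NAME=$(kubectl get quoteattestations.tcs.intel.com -n default \
  -o jsonpath='{.items[0].metadata.name}')
echo "$QA_NAME"
```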
  5. Prepare credential information:

We use command-line tools to read and write the QuoteAttestation manually. The tools, km-attest and km-wrap, are provided by the Intel® KMRA project.

NOTE: please use KMRA release version 2.2.1

$ mkdir -p $HOME/sgx/gateway
$ export CREDENTIAL=$HOME/sgx/gateway

$ kubectl get quoteattestations.tcs.intel.com -n default $QA_NAME -o jsonpath='{.spec.publicKey}' | base64 -d > $CREDENTIAL/public.key
$ kubectl get quoteattestations.tcs.intel.com -n default $QA_NAME -o jsonpath='{.spec.quote}' | base64 -d > $CREDENTIAL/quote.data
$ km-attest --pubkey $CREDENTIAL/public.key --quote $CREDENTIAL/quote.data

$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout $CREDENTIAL/example.com.key -out $CREDENTIAL/example.com.crt
$ openssl req -out $CREDENTIAL/httpbin.csr -newkey rsa:2048 -nodes -keyout $CREDENTIAL/httpbin.key -subj "/CN=httpbin.example.com/O=httpbin organization"
$ openssl x509 -req -sha256 -days 365 -CA $CREDENTIAL/example.com.crt -CAkey $CREDENTIAL/example.com.key -set_serial 0 -in $CREDENTIAL/httpbin.csr -out $CREDENTIAL/httpbin.crt
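As a quick sanity check (independent of SGX), you can confirm that the generated httpbin leaf certificate chains to the example.com CA created above:

```shell
CREDENTIAL=${CREDENTIAL:-$HOME/sgx/gateway}
# Verify the leaf certificate against the self-signed CA
openssl verify -CAfile "$CREDENTIAL/example.com.crt" "$CREDENTIAL/httpbin.crt"
# prints "<path>/httpbin.crt: OK" on success
# Inspect subject and issuer of the leaf certificate
openssl x509 -in "$CREDENTIAL/httpbin.crt" -noout -subject -issuer
```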

NOTE: Before using km-wrap, please configure /opt/intel/km-wrap/km-wrap.conf according to the content below:

{
    "keys": [
        {
            "signer": "tcsclusterissuer.tcs.intel.com/sgx-signer",
            "key_path": "$CREDENTIAL/httpbin.key",
            "cert": "$CREDENTIAL/httpbin.crt"
        }
    ]
}
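Note that the config file is most likely read verbatim, so the $CREDENTIAL placeholder above should be replaced with the real path when writing the file. A small sketch (the /tmp staging path is illustrative):

```shell
CREDENTIAL=${CREDENTIAL:-$HOME/sgx/gateway}
# Substitute the literal path for $CREDENTIAL while writing the config
sed "s|\$CREDENTIAL|$CREDENTIAL|g" > /tmp/km-wrap.conf <<'EOF'
{
    "keys": [
        {
            "signer": "tcsclusterissuer.tcs.intel.com/sgx-signer",
            "key_path": "$CREDENTIAL/httpbin.key",
            "cert": "$CREDENTIAL/httpbin.crt"
        }
    ]
}
EOF
# Then install it, e.g.: sudo cp /tmp/km-wrap.conf /opt/intel/km-wrap/km-wrap.conf
```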
  6. Update the credential quote attestation CR with a secret containing the wrapped key
$ WRAPPED_KEY=$(km-wrap --signer tcsclusterissuer.tcs.intel.com/sgx-signer --pubkey $CREDENTIAL/public.key --pin "HSMUserPin" --token "HSMSDSServer" --module /usr/local/lib/softhsm/libsofthsm2.so)

$ kubectl create secret generic -n default wrapped-key --from-literal=tls.key=${WRAPPED_KEY} --from-literal=tls.crt=$(base64 -w 0 < $CREDENTIAL/httpbin.crt)

Edit the quoteattestation via the command kubectl edit quoteattestations.tcs.intel.com $QA_NAME -n default and append the field secretName: wrapped-key to its spec section.
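Alternatively, assuming the CRD accepts a merge patch on its spec, the same change can be applied non-interactively:

```shell
# Append spec.secretName without opening an interactive editor
kubectl patch quoteattestations.tcs.intel.com "$QA_NAME" -n default \
  --type merge -p '{"spec":{"secretName":"wrapped-key"}}'
```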

The httpbin application above has SGX enabled and stores its private key inside the SGX enclave; it completes TLS handshakes and communicates with clients normally.

  7. Verify the service accessibility
$ export INGRESS_NAME=istio-ingressgateway
$ export INGRESS_NS=istio-system
$ export SECURE_INGRESS_PORT=$(kubectl -n "${INGRESS_NS}" get service "${INGRESS_NAME}" -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
$ export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n "${INGRESS_NS}" -o jsonpath='{.items[0].status.hostIP}')

$ curl -v -HHost:httpbin.example.com --resolve "httpbin.example.com:$SECURE_INGRESS_PORT:$INGRESS_HOST" \
  --cacert $CREDENTIAL/example.com.crt "https://httpbin.example.com:$SECURE_INGRESS_PORT/status/418"

The request succeeded if you receive an HTTP 418 "teapot" response from httpbin.

Cleaning Up

  1. Clean up for workloads:
# uninstall istio
$ istioctl x uninstall --purge -y
# delete workloads
$ kubectl delete -f ./deployment/istio-configs/sleep-hsm.yaml
$ kubectl delete -f ./deployment/istio-configs/httpbin-hsm.yaml
  2. Clean up for gateways:
# uninstall istio
$ istioctl x uninstall --purge -y
# delete workloads
$ kubectl delete -f ./deployment/istio-configs/httpbin-hsm.yaml -n default
$ kubectl delete -f ./deployment/istio-configs/httpbin-gateway.yaml -n default
