The AKS cluster has been enrolled in GitOps management, wrapping up the infrastructure focus of the AKS Secure Baseline reference implementation. Follow the steps below to import the TLS certificate that the Ingress Controller will serve, so that Application Gateway can connect to your web app.
📖 Fabrikam Drone Delivery procured a standard CA certificate to be used with the AKS Ingress Controller. It is not an EV certificate, as it will not be user facing.
- Obtain the Azure Key Vault details and give the current user permissions to import certificates.
📖 Finally, the app team decides to use a wildcard certificate of `*.aks-agic.fabrikam.com` for the ingress controller. They use Azure Key Vault to import and manage the lifecycle of this certificate.

```bash
export SIGNED_IN_OBJECT_ID=$(az ad signed-in-user show --query 'id' -o tsv)
KEYVAULT_NAME=$(az deployment group show --resource-group rg-shipping-dronedelivery-${LOCATION} -n cluster-stamp --query properties.outputs.keyVaultName.value -o tsv)
export KEYVAULT_ID=$(az resource show -g rg-shipping-dronedelivery-${LOCATION} -n $KEYVAULT_NAME --resource-type 'Microsoft.KeyVault/vaults' --query id --output tsv)
az role assignment create --role 'Key Vault Certificates Officer' --assignee $SIGNED_IN_OBJECT_ID --scope $KEYVAULT_ID
```
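⚠️ The next step imports a certificate from local `k8sic.crt` and `k8sic.key` files. If you only need a throwaway self-signed wildcard certificate to follow along, one way to generate those files is shown below. This is for illustration only and assumes OpenSSL 1.1.1 or later for the `-addext` flag.

```bash
# Illustration only: create a self-signed wildcard certificate and private key.
# Do not use a self-signed certificate for actual deployments.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -out k8sic.crt -keyout k8sic.key \
  -subj "/CN=*.aks-agic.fabrikam.com/O=Fabrikam Drone Delivery" \
  -addext "subjectAltName=DNS:*.aks-agic.fabrikam.com"
```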
- Import the AKS Ingress Controller's wildcard certificate for `*.aks-agic.fabrikam.com`.

⚠️ If you already have access to an appropriate certificate, or can procure one from your organization, consider using it for this step. For more information, see the import certificate tutorial using Azure Key Vault.

⚠️ Do not use the certificate created by this script for actual deployments. The use of self-signed certificates is provided for ease of illustration purposes only. For your cluster, follow your organization's requirements for procurement and lifetime management of TLS certificates, even for development purposes.

```bash
cat k8sic.crt k8sic.key > k8sic.pem
az keyvault certificate import -f k8sic.pem -n aks-internal-ingress-controller-tls --vault-name $KEYVAULT_NAME
```
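💡 Optionally, confirm the certificate was imported; the query below simply prints the new certificate's identifier:

```bash
az keyvault certificate show -n aks-internal-ingress-controller-tls --vault-name $KEYVAULT_NAME --query 'id' -o tsv
```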
- Remove the Azure Key Vault certificate import permissions from the current user.

```bash
az role assignment delete --role 'Key Vault Certificates Officer' --assignee $SIGNED_IN_OBJECT_ID --scope $KEYVAULT_ID
```
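💡 Optionally, verify the assignment is gone; the output should no longer include the Key Vault Certificates Officer role:

```bash
az role assignment list --assignee $SIGNED_IN_OBJECT_ID --scope $KEYVAULT_ID -o table
```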
📖 The app team wants to apply Azure Policy to their cluster just as they do for other Azure resources. Their pods will be covered by the Azure Policy add-on for AKS. Some of these policies may result in the denial of a specific Kubernetes API request operation to ensure a pod's specification complies with the organization's security best practices. Azure Policy also generates compliance data that helps the app team assess the current compliance state of the AKS cluster. The app team is going to assign, at the resource group level, the Azure Policy for Kubernetes built-in restricted initiative as well as five additional built-in individual Azure policies that require pods to define resource requests, restrict container images to trusted registries, allow root filesystem access only in read-only mode, enforce the use of internal load balancers, and enforce HTTPS-only Kubernetes Ingress objects.
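Regardless of how these assignments are made in your environment, assigning one such built-in policy at resource group scope from the CLI could look roughly like the sketch below; the definition identifier is a placeholder to look up, not the exact built-in used by this implementation:

```bash
# Sketch only: assign a built-in Azure Policy for Kubernetes at resource group scope.
# '<policy-definition-name-or-id>' is a placeholder; list built-in Kubernetes policies with:
#   az policy definition list --query "[?contains(displayName, 'Kubernetes')].displayName" -o table
az policy assignment create \
  --name enforce-https-ingress \
  --display-name 'Enforce HTTPS ingress in Kubernetes cluster' \
  --policy '<policy-definition-name-or-id>' \
  --resource-group rg-shipping-dronedelivery-${LOCATION}
```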
- Confirm policies are applied to the AKS cluster.

```bash
kubectl get constrainttemplate
```
The output should look similar to this:
```
k8sazureallowedcapabilities              3h48m
k8sazureallowedseccomp                   3h48m
k8sazureallowedusersgroups               3h48m
k8sazureblockhostnamespace               3h48m
k8sazurecontainerallowedimages           3h48m
k8sazurecontainerlimits                  3h48m
k8sazurecontainernoprivilege             3h48m
k8sazurecontainernoprivilegeescalation   3h48m
k8sazurehostnetworkingports              3h48m
k8sazureingresshttpsonly                 3h48m
k8sazureloadbalancernopublicips          3h48m
k8sazurereadonlyrootfilesystem           3h48m
k8sazurevolumetypes                      3h48m
```
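💡 To inspect any one of these templates in more detail, you can describe it; for example:

```bash
kubectl describe constrainttemplate k8sazurecontainerlimits
```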
📖 The app team wants to ensure that application operators are always reminded to reserve resource requests and limits for every microservice of the Fabrikam Drone Delivery shipping application they deploy to the AKS cluster. A well-known Kubernetes-native best practice to achieve this is to enforce resource quotas at the namespace level. This configuration is beneficial in many respects. Most importantly, the cluster will not run with unbounded resources, and it lays the groundwork for the app team's strategy to implement Horizontal Pod Autoscaling in the future.
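For context, a namespace-scoped ResourceQuota along the lines of what the following steps expect could look like the sketch below. The values are inferred from the sample output further down, and the actual object is delivered through the GitOps flow, so there is nothing to apply by hand; the `--dry-run=client` flag keeps this a no-op:

```bash
# Illustrative sketch only; the real ResourceQuota is managed by Flux.
kubectl apply --dry-run=client -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev
  namespace: backend-dev
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 2Gi
    limits.cpu: "2"
    limits.memory: 5G
EOF
```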
- Ensure Flux has created the following namespace.

```bash
# press Ctrl-C once you receive a successful response
kubectl get ns backend-dev -w
```
- Check that the `backend-dev` resource quota is enforced.

```bash
kubectl get resourcequota -n backend-dev
```
The output should look similar to this:
```
NAME   AGE   REQUEST                                                        LIMIT
dev    23s   pods: 5/5, requests.cpu: 765m/1, requests.memory: 1412Mi/2Gi   limits.cpu: 1280m/2, limits.memory: 1792Mi/5G
```
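💡 For a per-resource breakdown of the same usage, you can also describe the quota:

```bash
kubectl describe resourcequota dev -n backend-dev
```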
💡 The resource quota is tied to the application, what is considered acceptable performance for your solution, the agent node SKU, cost, and so on. Therefore, the app performance team is the one in charge of determining how much should be reserved per namespace. In a production cluster or in special namespaces, you might consider leaving a namespace unbounded so it can take all remaining resources if needed.