GitOps is an increasingly popular set of practices for managing the complexities of running hybrid, multicluster Kubernetes infrastructure. GitOps centers on treating Git repositories as the single source of truth and applying the Git workflows that have long been used for application development to infrastructure and application operations.
This repository provides a starting point to deploy the OpenShift GitOps operator as well as the first Argo CD application. In the annex, you can also see how to deploy a Sonatype Nexus Repository to store your CI/CD artifacts, such as `.jar` files, Helm charts, etc.
## Red Hat OpenShift GitOps

Red Hat OpenShift GitOps uses Argo CD to manage specific cluster-scoped resources, including platform operators, optional Operator Lifecycle Manager (OLM) operators, and user management. Argo CD is a popular Cloud Native Computing Foundation (CNCF) open-source GitOps Kubernetes operator for declarative configuration on Kubernetes clusters.
First, let’s install the OpenShift GitOps Operator on OCP and log in to the Argo CD instance.
```sh
oc process -f openshift/01-operator.yaml | oc apply -f -
```
> ℹ️ After the installation is completed, the operator pods will be running in the `openshift-operators` project.
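To verify the rollout, you can list the pods in that project (the exact pod name carries a generated suffix):

```sh
# Expect an openshift-gitops-operator pod in Running state
oc get pods -n openshift-operators
```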
By default, the GitOps operator deploys an ArgoCD instance in the `openshift-gitops` project. To avoid that and keep full control of the installation location, the following configuration has been set in the previous OCP template:
```yaml
spec:
  config:
    env:
      - name: DISABLE_DEFAULT_ARGOCD_INSTANCE
        value: "true"
      - name: DISABLE_DEFAULT_ARGOCD_CONSOLELINK
        value: "true"
```
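With those variables set, the operator should not create the default instance. A quick way to check (assuming the `openshift-gitops` namespace exists) is:

```sh
# Should list no ArgoCD instances in the default namespace
oc get argocd -n openshift-gitops
```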
Second, we are going to deploy our ArgoCD instance manually using the following template:
```sh
oc process -f openshift/02-argocd.yaml | oc apply -f -
```
This command creates a `gitops` namespace, a default Application Project, and an ArgoCD cluster. The configuration used is the default operator configuration with some small improvements: delegating authentication to the OpenShift integrated login and implementing basic RBAC policies.
Access the installed ArgoCD cluster using the following route:
```sh
oc get routes argocd-cluster-server -n gitops --template="https://{{.spec.host}}"
```
> ℹ️ After the Red Hat OpenShift GitOps Operator is installed, Argo CD automatically creates a user with admin permissions. However, we have disabled admin access in the ArgoCD RBAC using the `.spec.disableAdmin` property.
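The admin user is switched off with a single flag on the ArgoCD custom resource; a minimal fragment of that configuration looks like this:

```yaml
# Fragment of the ArgoCD custom resource: disable the built-in admin user
spec:
  disableAdmin: true
```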
Dex is installed by default for all the Argo CD instances created by the Operator. You can configure Red Hat OpenShift GitOps to use Dex as the SSO authentication provider by setting the `.spec.sso` parameter. This is the current configuration:
```yaml
spec:
  sso:
    dex:
      openShiftOAuth: true
    provider: dex
```
Dex uses the users and groups defined within the OpenShift Container Platform by checking the OAuth server provided by the platform. Use the following configuration so that every OCP `cluster-admin` is an ArgoCD admin:
```yaml
spec:
  rbac:
    defaultPolicy: 'role:admin'
    policy: |
      g, system:cluster-admins, role:admin
      g, cluster-admins, role:admin
    scopes: '[groups]'
```
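The policy CSV can be extended to map other OCP groups. For example, assuming a `developers` group exists in your cluster (the group name here is illustrative), you could grant it Argo CD's built-in read-only role:

```yaml
spec:
  rbac:
    policy: |
      g, system:cluster-admins, role:admin
      g, cluster-admins, role:admin
      # Hypothetical OCP group mapped to Argo CD's built-in read-only role
      g, developers, role:readonly
    scopes: '[groups]'
```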
> 🔥 Authorization in ArgoCD is a combination of permissions configured at several levels. The following three sections cover each of the levels you must take care of to set a proper authorization configuration; please read them carefully.
OpenShift GitOps supports several ArgoCD clusters on the same OCP cluster. This feature is essential for organizations that want to implement full multi-tenancy at the ArgoCD cluster level. With this feature available, we now have to distribute each namespace among the ArgoCD clusters. This is done using the `argocd.argoproj.io/managed-by` label.
Add a label to the application's namespace so that the Argo CD instance in the `gitops` namespace can manage it:

```sh
oc label namespace spring-petclinic argocd.argoproj.io/managed-by=gitops
```
If you don't do so, you will see the following error message in the web console when you try to synchronize an application:

```
Namespace "<namespace>" for <resource> "<resource-name>" is not managed.
```
Link to the documentation.
Cluster resources are not bound to a namespace, and, therefore, are not affected by the previous label. For that reason, non-default ArgoCD instances cannot control them. If you want to do so, you need to instruct the GitOps operator to allow it for your cluster like in the following example:
```yaml
spec:
  config:
    env:
      - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
        value: openshift-gitops, gitops
```
The ArgoCD instance only has privileges in its own namespace, which is `gitops`. To create, update, or list resources in other namespaces, you must update the RBAC for its service account.
This section can be as complex as the security requirements that your organization demands for the ArgoCD deployment. The easiest solution for non-production environments is to grant cluster-wide `admin` rights to the service account that interacts with the Kubernetes API:

```sh
oc adm policy add-cluster-role-to-user admin system:serviceaccount:gitops:argocd-cluster-argocd-application-controller
```
If you prefer per-project tuning, you can use the configuration set in the template `openshift/11-application-app.yaml`, where we grant project admin rights to the service account. This approach is also oriented towards a proper multi-tenancy configuration, as in the previous section. Check the mentioned template or use the following command:

```sh
oc adm policy add-role-to-user admin system:serviceaccount:gitops:argocd-cluster-argocd-application-controller -n spring-petclinic
```
You can achieve even finer tuning by creating a custom `Role` and `RoleBinding` to specify the resources that each ArgoCD instance is allowed to manage per namespace. This KCS article gives you an example of how to configure one of these `RoleBindings`.
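As a sketch of that finer-grained approach (the role name and resource list below are illustrative, not taken from the templates), a namespaced `Role` and `RoleBinding` for the application controller could look like this:

```yaml
# Illustrative Role limiting Argo CD to Services and Deployments in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-managed
  namespace: spring-petclinic
rules:
  - apiGroups: ["", "apps"]
    resources: ["services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-managed
  namespace: spring-petclinic
subjects:
  - kind: ServiceAccount
    name: argocd-cluster-argocd-application-controller
    namespace: gitops
roleRef:
  kind: Role
  name: argocd-managed
  apiGroup: rbac.authorization.k8s.io
```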
Extra documentation:

- Deep-dive blog post about namespace isolation using the SA `RoleBindings`.
- Upstream issue regarding permissions for the ArgoCD instance.
Create an Application resource using the following template:
```sh
oc process -f openshift/10-application-infra.yaml | oc apply -f -
```
> ❗ TL;DR: Execute the following script to auto-install a Nexus instance in your cluster: `./auto-install-nexus.sh`
Nexus Repository OSS is an open-source repository that supports many artifact formats, including Docker, Java™, and npm. With the Nexus tool integration, pipelines in your toolchain can publish and retrieve versioned apps and their dependencies by using central repositories that are accessible from other environments.
If you are planning to deploy your applications using Helm charts, in most architectures you will need a Helm repository to host packaged Helm charts. Install a Nexus repository manager using the following commands:
```sh
# Define common variables
OPERATOR_NAMESPACE="nexus"

# Deploy the operator
oc process -f openshift/nexus/01-operator.yaml \
    -p OPERATOR_NAMESPACE=$OPERATOR_NAMESPACE | oc apply -f -

# Deploy the application instance
oc process -f openshift/nexus/02-server.yaml \
    -p OPERATOR_NAMESPACE=$OPERATOR_NAMESPACE \
    -p SERVER_NAME="nexus-server" | oc apply -f -
```
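Once applied, you can watch the instance come up (the exact resource names depend on the templates; `nexus-server` is the name passed above):

```sh
# Check the Nexus pods and route in the operator namespace
oc get pods,routes -n nexus
```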
Create a Helm repository with the following steps:

1. Access the Nexus route: `oc get routes nexus-server --template="https://{{.spec.host}}"`.
2. Log in using the admin credentials: `admin` / `admin123`.
3. Go to Server Administration > Repositories > Create Repositories > "Helm (hosted)":
    - Name: `helm-charts`.
    - Deployment Policy: `Allow redeploy`.
4. Click on `Create repository`.
> ℹ️ If you don't want to use the console, you can use the `curl` command to create this repository. Check an example in the `auto-install-nexus.sh` script.
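A sketch of that scripted alternative, using the Nexus 3 REST API (the endpoint and payload fields are from the upstream API; the host and credentials are the defaults used above):

```sh
# Create a hosted Helm repository through the Nexus REST API
NEXUS_URL=$(oc get routes nexus-server --template="https://{{.spec.host}}")
curl -u admin:admin123 -X POST "$NEXUS_URL/service/rest/v1/repositories/helm/hosted" \
  -H "Content-Type: application/json" \
  -d '{"name": "helm-charts", "online": true,
       "storage": {"blobStoreName": "default",
                   "strictContentTypeValidation": true,
                   "writePolicy": "ALLOW"}}'
```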
OpenShift GitOps is included as part of the OpenShift Container Platform subscription and is supported per the Red Hat production terms of support.
- For the supported versions of GitOps on OCP, check the Red Hat OpenShift Container Platform Life Cycle Policy.
- For the versions of the upstream components, check the Compatibility and support matrix.
- For the Tech Preview components, check the Technology Preview features section.

For more information, check the OpenShift GitOps general Release Notes.