Docs improvements (#10)
* Docs improvements

* Add sidecar version
wcmjunior authored Aug 13, 2024
1 parent 261d740 commit 5856356
Showing 12 changed files with 789 additions and 330 deletions.
264 changes: 151 additions & 113 deletions README.md

Large diffs are not rendered by default.

111 changes: 111 additions & 0 deletions docs/certificates.md
# Configuring certificates for Helm sidecars

You can use Cyral's default [sidecar-created
certificate](https://cyral.com/docs/sidecars/certificates/overview#sidecar-created-certificate) or use a
[custom certificate](https://cyral.com/docs/sidecars/certificates/overview#custom-certificate) to secure
the communications performed by the sidecar.

This page describes two ways of deploying a custom certificate to
your `helm` sidecar:

- using `cert-manager` to provision the certificate automatically on your cluster; or
- provisioning a certificate signed by the Certificate Authority of your choice.

The first approach creates a stack for certificate management based on
a set of certificate signing and validation methods. The second approach
creates a `kubernetes` secret containing the information from the
provisioned certificate.

## `cert-manager` provisioned certificate

This set of instructions makes use of [`cert-manager`](https://cert-manager.io/docs/), an extension to `kubernetes`
that uses CRDs to easily manage certificates from different sources.

### Prerequisites

1. Have a [Kubernetes cluster](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) deployed.
2. [Install Helm 3](https://helm.sh/docs/intro/install/).
3. Have [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) permissions to install CRDs.

### Installing cert-manager

Installation of `cert-manager` is covered in detail in [its official documentation](https://cert-manager.io/docs/installation/). We recommend
installing it using `helm`.

To install the latest version of `cert-manager`, run the following command:
```bash
helm upgrade -i cert-manager cert-manager -n cert-manager --repo https://charts.jetstack.io --create-namespace --set installCRDs=true
```
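
To confirm the installation succeeded, you can check that the `cert-manager` pods are running (a quick sanity check; not required by the chart):

```bash
# The controller, webhook, and cainjector pods should all reach Running.
kubectl get pods -n cert-manager
```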

### Creating an issuer

An `Issuer` is a `cert-manager` resource that configures how your certificate will be validated. The issuer's configuration will vary
with your cloud provider and validation method. Refer to the [project documentation](https://cert-manager.io/docs/configuration/) to create an issuer.
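
As a minimal illustration only (a real issuer will typically use ACME or a private CA; the name below is a placeholder), a self-signed `Issuer` could look like this:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer   # illustrative name
  namespace: <sidecar namespace>
spec:
  selfSigned: {}            # cert-manager's built-in self-signing issuer type
```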


### Creating the certificate

After creating an issuer, you need to create a `Certificate` resource so that `cert-manager` starts the validation process for your domain using the
configuration created in the `Issuer` from the last step. The certificate should look something like this:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: acme-crt
  namespace: <sidecar namespace>
spec:
  secretName: <certificate secret name>
  dnsNames:
    - my-sidecar.my-domain.com
  issuerRef:
    name: <your issuer name>
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: Issuer
    group: cert-manager.io
```
This triggers a chain of events that eventually creates a `tls` secret named `<certificate secret name>` in the `<sidecar namespace>` namespace.
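
You can follow the validation and confirm the secret was created with commands like these (names match the example above):

```bash
# Check the Certificate's status conditions reported by cert-manager.
kubectl describe certificate acme-crt -n <sidecar namespace>
# Once the certificate is issued, the tls secret should exist.
kubectl get secret <certificate secret name> -n <sidecar namespace>
```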

**The secret name must be provided to the sidecar Helm chart. See [how to do
it here](#provide-custom-certificate-to-the-sidecar).**

**WARNING:** By default, the sidecar has permissions to `get` and `watch` `v1/Secret` resources in the namespace
it's created in. If you are using a custom `ServiceAccount`, make sure it has these permissions attached to it.
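
For reference, a `Role` granting those permissions might look like the sketch below (the role name is illustrative; bind it to your custom `ServiceAccount` with a `RoleBinding`):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sidecar-secret-reader   # illustrative name
  namespace: <sidecar namespace>
rules:
  - apiGroups: [""]             # core API group, where v1/Secret lives
    resources: ["secrets"]
    verbs: ["get", "watch"]
```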

## Provide custom certificate to the sidecar

To provide a custom certificate to the sidecar, first create a secret, then provide the
secret name in the values file of the Helm chart.

The `helm` sidecar makes use of [tls secrets](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) to load
custom certificates.

You can create the secret from a PEM encoded certificate file and a key file using the following command:
```bash
kubectl create secret tls my-tls-secret \
  --cert=path/to/cert/file \
  --key=path/to/key/file \
  --namespace <sidecar namespace>
```

To make the sidecar use your custom certificate, provide the name of the secret
to the sidecar Helm chart.

Suppose you created the secrets `my-tls-secret` and `my-ca-secret`; you would then
provide the following in your values file:

```yaml
cyral:
  sidecar:
    certificates:
      tls:
        existingSecret: "my-tls-secret"
      ca:
        existingSecret: "my-ca-secret"
```

Whether to provide a `tls` secret, a `ca` secret, or *both* depends on the repositories
used by your sidecar. See the certificate type used by each repository on the
[sidecar certificates](https://cyral.com/docs/sidecars/deployment/certificates#sidecar-certificate-types) page.
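
If you need the `ca` secret and don't already have one, the sketch below assumes the chart accepts the same `tls` secret layout for the CA material (check the chart's default `values.yaml` for the expected format):

```bash
kubectl create secret tls my-ca-secret \
  --cert=path/to/ca/cert/file \
  --key=path/to/ca/key/file \
  --namespace <sidecar namespace>
```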
84 changes: 84 additions & 0 deletions docs/metrics.md
# Reading metrics from Helm sidecars

**NOTE:** All the metric definitions and their meanings are listed on our [metrics reference page](https://cyral.com/docs/sidecars/monitoring/metrics).

Metric collection on the `helm` sidecar can be configured in numerous ways, depending
on the `Prometheus` configuration for your `Kubernetes` cluster. You can
set the metrics port by adding the following to your `values.yaml` file:

```yaml
containerPorts:
  metrics: 9000 # this is the default value
```
By default, this port is not exposed on the `Service` object created by the `helm` chart.
To expose it, add the following to your `values.yaml` file:

```yaml
service:
  ports:
    metrics: 9000
  targetPort:
    metrics: metrics
```
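
As a quick smoke test (illustrative commands; substitute your actual service name and namespace), you can port-forward the metrics port and query it:

```bash
# Forward the service's metrics port to localhost and fetch a few metrics.
kubectl port-forward -n <sidecar namespace> svc/<sidecar service> 9000:9000 &
curl -s http://localhost:9000/metrics | head
```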

## Prometheus configuration

### Service Monitor discovery configuration

The sidecar `helm` chart packages a `ServiceMonitor` object which can be used
in conjunction with the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator) to
monitor all pods in the sidecar's `Deployment`. To enable the service monitor,
add the following to your `values.yaml` file:

```yaml
metrics:
  serviceMonitor:
    enabled: true
```

**NOTE:** There are many other configuration options for the `ServiceMonitor` object;
see the default `values.yaml` file for the full list.

### Annotation based Prometheus discovery configuration

You can add common `Prometheus` annotations by adding the following
to your `values.yaml` file:

```yaml
podAnnotations:
  "prometheus.io/scrape": "true"
  "prometheus.io/port": "9000"
```

**NOTE:** You can learn how to configure `Prometheus` service discovery for `Kubernetes`
in [Prometheus' documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config).
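
For reference, a minimal scrape job on the Prometheus side that honors these annotations could look like the sketch below; it lives in your Prometheus configuration, not in the chart's values:

```yaml
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only keep pods annotated with prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Rewrite the scrape address to the port given in prometheus.io/port.
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```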

## Datadog configuration

Datadog metric scraping on Kubernetes can be configured in several different ways;
refer to [Datadog's documentation](https://docs.datadoghq.com/containers/kubernetes/prometheus/?tab=kubernetesadv2) for
a more in-depth explanation of Datadog metrics collection on Kubernetes.

Metrics are exposed through the `metrics-aggregator` container, on the `metrics.port` port, in `OpenMetrics` format.
A sample annotation you can create by changing your `values.yaml` file is the following:

```yaml
podAnnotations:
  ad.datadoghq.com/metrics-aggregator.checks: |
    {
      "openmetrics": {
        "init_config": {},
        "instances": [
          {
            "openmetrics_endpoint": "http://%%host%%:9000/metrics",
            "namespace": "cyral",
            "metrics": ["cyral*", "up"]
          }
        ]
      }
    }
```

This example exposes any metrics starting with `cyral`, plus the `up` metric,
to Datadog under the `cyral` namespace.
63 changes: 63 additions & 0 deletions docs/node-scheduling.md
# Scheduling nodes for a Helm sidecar

There are several ways to control which nodes your sidecar pods should and
should not be scheduled on.

## Node Selectors

In the `cyral-sidecar` chart, use the variable `nodeSelector` to force
your sidecar pods to run on a specific set of Kubernetes cluster
nodes. The syntax uses a label-value pair to specify the nodes:

```yaml
nodeSelector:
  SOME_LABEL: SOME_VALUE
```
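
For the selector above to match, the target nodes must carry that label; you can add one with a command like this (label and node name are placeholders):

```bash
kubectl label nodes <node-name> SOME_LABEL=SOME_VALUE
```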
Learn more about [node selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector).

## Node Affinity

To set the node affinity for the pods, use the variable `affinity`. It gives you an
expressive language to define affinities and anti-affinities for each pod in the deployment.

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/e2e-az-name
              operator: In
              values:
                - e2e-az1
                - e2e-az2
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: another-node-label-key
              operator: In
              values:
                - another-node-label-value
```

**TIP**: You can configure presets for pod anti-affinity and pod affinity using the
`podAntiAffinityPreset` and `podAffinityPreset` keys in the [values file](./values-file.md#deployment-configuration).

Learn more about [affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).

## Pod tolerations

You can set tolerations for your pods, so that they can be scheduled onto tainted
nodes. To set the tolerations, use the variable `tolerations`.

```yaml
tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
```
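
This toleration matches a taint like the one below, which you could apply with the following command (node name is a placeholder):

```bash
kubectl taint nodes <node-name> key1=value1:NoSchedule
```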

Learn more about [taints and tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
64 changes: 64 additions & 0 deletions docs/port-configuration.md
# Restricting ports where users connect to repositories

A single Cyral sidecar cluster usually monitors and protects many
repositories of different types. To make it easy for data users to
connect to these repositories using the port numbers they're
accustomed to, the sidecar cluster exposes multiple ports.
You can restrict or expand the set of exposed ports by editing
the `values.yaml` file.

## Declaring container ports

In the `values.yaml` file, you can find several parameters related
to port exposure. The `containerPorts` object specifies which ports
the container will listen on. It is a map of `<port-name>: <port-number>` entries,
where `<port-name>` is an arbitrary name for the port and `<port-number>`
is the TCP port number. These are the same port numbers used to bind
data repositories on the Control Plane.

```yaml
containerPorts:
  mysql: 3306
  pg: 5432
  mongodb0: 27017
  mongodb1: 27018
  mongodb2: 27019
```
The above example declares some port names (`mysql`, `pg`, `mongodb0`, `mongodb1`,
and `mongodb2`) and their corresponding port numbers. We can refer to these port
names later on to expose them through a Kubernetes service.

## Exposing container ports

To expose container ports to external traffic or to other pods within the cluster, you need to set
service ports. The `service` object defines `ports` and `targetPort`. The `ports` property specifies
the ports the Service will expose, while `targetPort` maps the Service ports to the container's
`containerPorts` declared previously.

In `service.ports`, you define a map of `<port-name>: <port-number>` entries the Kubernetes service
will listen on. Then, you can use `service.targetPort` to map service ports to container ports
in the format `<service-port-name>: <container-port-name>`. For instance, assuming you defined a
container port as `mysql: 3306` and a service port as `mysql: 3306`, you can set `mysql: mysql`
in `targetPort` to link them.

Following is an example of how to set service ports.

```yaml
service:
  ...
  ports:
    mysql: 3306
    pg: 5432
    mongodb0: 27017
    mongodb1: 27018
    mongodb2: 27019
  targetPort:
    mysql: mysql
    pg: pg
    mongodb0: mongodb0
    mongodb1: mongodb1
    mongodb2: mongodb2
```

The above example exposes ports `3306`, `5432`, `27017`, `27018`, and `27019` on the service.
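
After upgrading the release, you can confirm the exposure on the rendered `Service` (service name and namespace are placeholders):

```bash
kubectl get service <sidecar service> -n <sidecar namespace> -o wide
```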
59 changes: 59 additions & 0 deletions docs/pre-existing-sa.md
# Using a pre-existing service account for a Helm sidecar

When the sidecar is deployed, it creates a role to access some of the
Kubernetes APIs. You may wish to retain control over roles and role
bindings, managing them outside of your Helm charts. You can
accomplish this by deploying the sidecar using an external service
account. Below, we explain how to do this.

1. **Create the service account**: You can deploy the Cyral sidecar using
a Kubernetes service account that you create and manage. Here,
we'll create a service account. In this command,
- `SIDECAR_NAMESPACE` is the Kubernetes namespace where the Cyral
sidecar cluster will run
- `SIDECAR_SA` is the Kubernetes service account that will deploy and
run the Cyral sidecar cluster

```bash
kubectl create sa -n $SIDECAR_NAMESPACE $SIDECAR_SA
```

2. **Create the roles and role bindings**: The Cyral sidecar
requires three separate roles:
- a role for the *sidecar exporter* (service that sends sidecar health
metrics to the Cyral management console),
- a role for the sidecar’s *log shipper* that sends
log data to services such as ELK, and
- a role for *accessing the Kubernetes secret* with credentials
needed to access the Cyral control plane.

Follow the examples below, replacing the names in angle brackets
with names suitable for your environment:

```bash
kubectl create role <ROLE FOR EXPORTER> --verb=get --resource=services -n $SIDECAR_NAMESPACE
kubectl create role <ROLE FOR LOG SHIPPER> --verb=get,watch,list --resource=pods -n $SIDECAR_NAMESPACE
kubectl create role <ROLE FOR SECRETS> --verb=get,watch,patch --resource=secrets -n $SIDECAR_NAMESPACE
```

Bind the roles to the service account:

```bash
kubectl create rolebinding <BINDING FOR EXPORTER> --role=<ROLE FOR EXPORTER> --serviceaccount=$SIDECAR_NAMESPACE:$SIDECAR_SA --namespace $SIDECAR_NAMESPACE
kubectl create rolebinding <BINDING FOR LOG SHIPPER> --role=<ROLE FOR LOG SHIPPER> --serviceaccount=$SIDECAR_NAMESPACE:$SIDECAR_SA --namespace $SIDECAR_NAMESPACE
kubectl create rolebinding <BINDING FOR SECRETS ROLE> --role=<ROLE FOR SECRETS> --serviceaccount=$SIDECAR_NAMESPACE:$SIDECAR_SA --namespace $SIDECAR_NAMESPACE
```

3. **Modify the values.yaml file**: The downloaded `values.yaml` file
needs to be modified to use the service account created above.
Note that `serviceAccount.create` must be set to `false`:

```yaml
serviceAccount:
  name: $SIDECAR_SA
  create: false
```
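
Finally, install or upgrade the sidecar with the modified values file (the release name and chart reference below are placeholders). Note that `$SIDECAR_SA` in the snippet above stands for the literal service account name, since Helm does not expand shell variables inside values files:

```bash
helm upgrade -i <release name> <chart reference> \
  --namespace $SIDECAR_NAMESPACE \
  --values values.yaml
```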