
Docs edits.
ctauchen committed Nov 4, 2024
1 parent d00df2c commit 8815234
Showing 8 changed files with 156 additions and 173 deletions.
File renamed without changes.
@@ -98,21 +98,21 @@ Step 1: Install Calico node and fluent-bit log forwarder packages.

- If your non-cluster host or VM has RHEL 8 installed, then run the following commands:

```bash
dnf install https://downloads.tigera.io/ee/archives/calico-node-3.20~pre2.0.el8.x86_64.rpm
dnf install https://downloads.tigera.io/ee/archives/calico-fluent-bit-3.1.8.el8.x86_64.rpm
```

- If your non-cluster host or VM has RHEL 9 installed, then run the following commands:

```bash
dnf install https://downloads.tigera.io/ee/archives/calico-node-3.20~pre2.0.el9.x86_64.rpm
dnf install https://downloads.tigera.io/ee/archives/calico-fluent-bit-3.1.8.el9.x86_64.rpm
```

Only Red Hat Enterprise Linux 8 and 9 x86-64 operating systems are supported in this version of {{prodname}}.

- Step 2: Copy the kubeconfig created in Cluster setup step 3 to host folder `/etc/calico/kubeconfig` and change ownership to `calico:calico`.
+ Step 2: Copy the kubeconfig created in cluster setup step 3 to host folder `/etc/calico/kubeconfig` and change ownership to `calico:calico`.

Step 3: Start Calico node and log forwarder.


This file was deleted.

@@ -0,0 +1,143 @@
---
description: Install Calico on non-cluster hosts and VMs
---

# Install Calico on non-cluster hosts and VMs

## Big picture

Secure hosts and virtual machines (VMs) running outside of Kubernetes by installing {{prodname}}.

## Value

Calico Enterprise can also be used to protect hosts and VMs running outside of a Kubernetes cluster. In many cases, these are applications and workloads that cannot be easily containerized. {{prodname}} lets you protect and gain visibility into these **non-cluster hosts** and use the same robust Calico network policy that you use for pods.

## Concepts

### Non-cluster hosts and host endpoints

A non-cluster host or VM is a computer that is running an application that is _not part of a Kubernetes cluster_. {{prodname}} enables you to protect these hosts and VMs using the same Calico network policy that you use for workloads running inside a Kubernetes cluster. It also generates flow logs that provide visibility into the communication that the host or VM has with other endpoints in your environment.

In the following diagram, a Kubernetes cluster is running {{prodname}} with networking (for pod-to-pod communication) and network policy; the non-cluster host uses Calico network policy for host protection and to generate flow logs for observability.

![non-cluster-host](/img/calico-enterprise/non-cluster-host.png)

For non-cluster hosts and VMs, you can secure host interfaces using **host endpoints**. Host endpoints can have labels that work the same as labels on pods/workload endpoints in Kubernetes. The advantage is that you can write network policy rules that apply to both workload endpoints and host endpoints using label selectors, where each selector can refer to either type (or a mix of the two). For example, you can easily write a global policy that applies to every host, VM, or pod that is running Calico, as in the sketch below.
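
A minimal sketch of such a policy (the `environment` and `role` labels and the port are illustrative, not from the original doc):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-monitoring
spec:
  # Matches every endpoint with this label: host endpoints and workload endpoints alike.
  selector: environment == 'production'
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: role == 'monitoring'
    destination:
      ports:
      - 9090
```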

To learn how to restrict traffic to and from hosts using Calico network policy, see [Protect hosts and VMs](../../network-policy/hosts/protect-hosts.mdx).

## Before you begin

**Required**

- Kubernetes API datastore is up and running and is accessible from the host

If {{prodname}} is installed on a cluster, you already have a datastore.

- Non-cluster host or VM meets {{prodname}} [system requirements](../install-on-clusters/requirements.mdx)

Ensure that your node OS includes the `ipset` and `conntrack` kernel dependencies.
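
A quick way to check for these on the host (a sketch; package names and tooling vary by distribution):

```bash
# Userspace tools present?
ipset --version
conntrack --version

# Kernel modules loaded (or built in)?
lsmod | grep -E 'ip_set|nf_conntrack'
```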

## How to

### Set up your Kubernetes cluster to work with a non-cluster host or VM

1. Apply the `NonClusterHost` custom resource. The `endpoint` field is required. It defines the remote endpoint for the non-cluster host log forwarder.

   ```bash
   kubectl create -f - <<EOF
   apiVersion: operator.tigera.io/v1
   kind: NonClusterHost
   metadata:
     name: tigera-secure
   spec:
     endpoint: <endpoint for non-cluster host log forwarder>
   EOF
   ```

   Wait until the manager shows a status of `Available`, then proceed to the next step.
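
   One way to watch for that (a sketch, assuming the Tigera operator's `TigeraStatus` resources are available in your cluster):

   ```bash
   # The manager component should report AVAILABLE=True before you continue.
   kubectl get tigerastatus manager
   ```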

1. Obtain the token for the `tigera-noncluster-host` service account.

   ```bash
   kubectl get secret -n calico-system tigera-noncluster-host -o jsonpath='{.data.token}' | base64 --decode
   ```

1. Use a text editor to create a kubeconfig file for the non-cluster host or VM.

   ```yaml
   apiVersion: v1
   kind: Config
   current-context: noncluster-hosts
   preferences: {}
   clusters:
   - cluster:
       certificate-authority-data: <your cluster certificate>
       server: <your cluster server>
     name: noncluster-hosts
   contexts:
   - context:
       cluster: noncluster-hosts
       user: tigera-noncluster-host
     name: noncluster-hosts
   users:
   - name: tigera-noncluster-host
     user:
       token: <token from previous step>
   ```

   Take the cluster information (the `certificate-authority-data` and `server` values) from an existing kubeconfig for the cluster.
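
   As a quick sanity check (a sketch; the `/version` endpoint is typically readable by any authenticated user), confirm that the file can reach the API server:

   ```bash
   kubectl --kubeconfig ./kubeconfig version
   ```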

1. Create a [`HostEndpoint` resource](../../reference/host-endpoints/overview.mdx) for each non-cluster host or VM. The `node` and `expectedIPs` fields are required and must match the hostname and the expected interface IP addresses, as in the sketch below.
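
   A minimal sketch, with a placeholder hostname, label, interface, and address:

   ```yaml
   apiVersion: projectcalico.org/v3
   kind: HostEndpoint
   metadata:
     name: my-host-eth0
     labels:
       environment: production
   spec:
     interfaceName: eth0
     node: my-host        # must match the host's hostname
     expectedIPs:
     - 192.0.2.10         # the interface's expected IP address
   ```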

1. Follow the [Configure access to {{prodname}} Manager UI](../../operations/cnx/access-the-manager.mdx) guide to configure {{prodname}} Manager access.

### Set up your non-cluster host or VM

1. Install the Calico node and fluent-bit log forwarder packages.

   - If your non-cluster host or VM has RHEL 8 installed, run the following commands:

     ```bash
     dnf install https://downloads.tigera.io/ee/archives/calico-node-3.20~pre2.0.el8.x86_64.rpm
     dnf install https://downloads.tigera.io/ee/archives/calico-fluent-bit-3.1.8.el8.x86_64.rpm
     ```

   - If your non-cluster host or VM has RHEL 9 installed, run the following commands:

     ```bash
     dnf install https://downloads.tigera.io/ee/archives/calico-node-3.20~pre2.0.el9.x86_64.rpm
     dnf install https://downloads.tigera.io/ee/archives/calico-fluent-bit-3.1.8.el9.x86_64.rpm
     ```

   Only Red Hat Enterprise Linux 8 and 9 x86-64 operating systems are supported in this version of {{prodname}}.

1. Copy the kubeconfig created in cluster setup step 3 to the host at `/etc/calico/kubeconfig` and change its ownership to `calico:calico`.

1. Start Calico node and log forwarder.

   ```bash
   systemctl start calico-node.service
   systemctl start calico-fluent-bit.service
   ```

   You can configure the Calico node by tuning the environment variables defined in the `/etc/calico/calico-node/calico-node.env` file. For more information, see the [Felix configuration reference](../../reference/resources/felixconfig.mdx).
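
   To confirm that both services came up cleanly (a quick check, not part of the original steps):

   ```bash
   systemctl status calico-node.service calico-fluent-bit.service
   journalctl -u calico-node.service --since "5 minutes ago"
   ```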

### Configure hosts to communicate with your Kubernetes cluster

If you use {{prodname}} in network policy-only mode (that is, with another CNI plugin), you must ensure that the non-cluster host or VM can communicate directly with your Kubernetes cluster. Here are some vendor tips:

#### AWS

- For hosts to communicate with your Kubernetes cluster, the node must be in the same VPC as the nodes in your Kubernetes cluster, and must use the AWS VPC CNI plugin (used by default in EKS).
- The Kubernetes cluster security group needs to allow traffic from your host endpoint. Make sure that an inbound rule is set so that traffic from your host endpoint node is allowed.
- For a non-cluster host to communicate with an EKS cluster, the correct IAM roles must be configured.
- You also need to provide authentication to your Kubernetes cluster using [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html) and the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html); a kubeconfig sketch for this follows the list.
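
One way to wire that up in the kubeconfig's `users` section (a sketch; the user and cluster names are placeholders, and `aws eks get-token` assumes a recent AWS CLI):

```yaml
users:
- name: eks-user
  user:
    exec:
      # The AWS CLI fetches a short-lived token for the EKS cluster.
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: ['eks', 'get-token', '--cluster-name', 'my-eks-cluster']
```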

#### GKE

For hosts to communicate with your Kubernetes cluster directly, you must make the host directly reachable and routable; this is not set up by default with VPC-native network routing.

## Additional resources

- [Protect hosts and VMs](../../network-policy/hosts/protect-hosts.mdx)

This file was deleted.

@@ -118,10 +118,10 @@
"label": "Install on non-cluster hosts",
"link": {
"type": "doc",
"id": "getting-started/install-on-non-cluster-hosts/index"
"id": "getting-started/bare-metal/index"
},
"items": [
"getting-started/install-on-non-cluster-hosts/about"
"getting-started/bare-metal/about"
]
},
{
4 changes: 2 additions & 2 deletions sidebars-calico-enterprise.js
@@ -95,8 +95,8 @@ module.exports = {
{
type: 'category',
label: 'Install on non-cluster hosts',
- link: { type: 'doc', id: 'getting-started/install-on-non-cluster-hosts/index' },
- items: ['getting-started/install-on-non-cluster-hosts/about'],
+ link: { type: 'doc', id: 'getting-started/bare-metal/index' },
+ items: ['getting-started/bare-metal/about'],
},
{
type: 'category',
