This repository provides basic information about how to begin your journey with OpenShift Virtualization on an OpenShift Container Platform (OCP) installation. All the examples and configuration are prepared for an OCP installation on AWS.
OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads. More information here.
> ℹ️ **OpenShift Kubernetes Engine**
>
> Red Hat OpenShift Kubernetes Engine is a product offering from Red Hat that lets you use an enterprise-class Kubernetes platform as a production platform for launching containers. OpenShift Virtualization is supported on OKE.
This repository contains some basic examples of VM configuration for OpenShift Virtualization. If you have already configured it and just want to use the VMs in your deployments, take a look directly at the VM configurations:
| Template | Networking | Configuration | Storage | Location |
|---|---|---|---|---|
| Fedora-01 | ✔️ Pod Network<br>❌ Multus | ❌ Secrets<br>❌ ConfigMap | ❌ DataVolume<br>❌ SharedFS | |
| Fedora-02 | ✔️ Pod Network<br>✔️ Multus | ❌ Secrets<br>❌ ConfigMap | ❌ DataVolume<br>❌ SharedFS | |
| Fedora-03 | ✔️ Pod Network<br>✔️ Multus | ✔️ Secrets<br>✔️ ConfigMap | ❌ DataVolume<br>❌ SharedFS | |
| Fedora-04 | ✔️ Pod Network<br>❌ Multus | ❌ Secrets<br>❌ ConfigMap | ✔️ DataVolume<br>❌ SharedFS | |
| Fedora-05 | ✔️ Pod Network<br>❌ Multus | ❌ Secrets<br>❌ ConfigMap | ❌ DataVolume<br>✔️ SharedFS | |
In order to be able to create VMs on OpenShift on AWS, you will need to add bare metal nodes to your OpenShift installation. The following steps simplify the process of creating a Bare Metal MachineSet:
```bash
./scripts/00-create-baremetal-instance.sh
```
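For orientation, below is a trimmed sketch of the kind of MachineSet the script generates. The name, instance type, and availability zone are illustrative, and provider-specific fields (AMI, subnet, IAM profile, selector) are omitted; note the `usage: virtualization` node label, which is used later by the NMState node selector.

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <cluster-id>-metal-eu-west-3a   # illustrative name
  namespace: openshift-machine-api
spec:
  replicas: 1
  template:
    spec:
      metadata:
        labels:
          usage: virtualization          # consumed later by the NNCP node selector
      providerSpec:
        value:
          instanceType: c5n.metal        # any AWS *.metal instance type works here
```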
> 💡 **AWS Instances**
>
> For other sizes of AWS nodes, check the instance types list. Take into account that pricing for Bare Metal nodes is considerably higher than for normal instances.
For more information, check the official documentation.
Before creating the network configuration on your bare metal node, the node needs an extra network interface to use for that purpose. The procedure is covered in the AWS documentation.
In general, you will have to access the Network Interfaces configuration for your current region (in my case, eu-west-3) and create a network interface using the following configuration:
- Description: `Secondary network interface for bare metal node`.
- Subnet: select the private subnet in the same region as the bare metal node (eu-west-3).
- [Optional] Security Groups: you can add the `Security group for Kubernetes ELB` to your interface.
Click on **Create network interface**. Then access the nodes section, click on the metal node, open the **Networking** tab, and attach the new interface to it.
This is automated with the following script:
```bash
./scripts/01-add-net-interface.sh ./aws-env-vars
```
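Roughly, the script performs the equivalent of the following AWS CLI calls (the variable names are illustrative, not necessarily those used in `aws-env-vars`):

```bash
# Create the secondary interface in the private subnet of the metal node
aws ec2 create-network-interface \
  --subnet-id "$PRIVATE_SUBNET_ID" \
  --description "Secondary network interface for bare metal node" \
  --groups "$SECURITY_GROUP_ID"

# Attach it to the bare metal instance as a second device
aws ec2 attach-network-interface \
  --network-interface-id "$ENI_ID" \
  --instance-id "$METAL_INSTANCE_ID" \
  --device-index 1
```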
Documentation:

> Red Hat OpenShift Virtualization is a Kubernetes operator on RHOCP, based on the Kubernetes virtualization add-on, KubeVirt, which allows the management of VM workloads alongside container workloads.
>
> The cluster must be installed on-premise, on bare metal with Red Hat Enterprise Linux CoreOS workers. AWS is not supported yet.
> 💡 **Disambiguation**
>
> Although the official documentation states that AWS is not supported, this repository shows how OpenShift Virtualization can be deployed and tested on OCP on AWS by adding bare metal worker nodes to the cluster.
- Install the operator:

  ```bash
  oc apply -k openshift/ocp-virt-operator
  ```

- Create the `HyperConverged` object, which deploys and manages OpenShift Virtualization and its components:

  ```bash
  oc apply -k openshift/ocp-virt-configuration
  ```
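For reference, a default HyperConverged instance is essentially the following object (the kustomization in `openshift/ocp-virt-configuration` may set additional fields):

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
```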
If you install the operator using the web console, you will be prompted during installation to click on the **Create HyperConverged** button, which creates a default HyperConverged instance so that you can create virtual machines.
These are the CRDs that you can interact with in the **Installed Operators** section:
- **[HC] OpenShift Virtualization Deployment (HyperConverged)**: deploys and manages OpenShift Virtualization and its components, such as the `virt-controller` cluster-level component and the `virt-handler` host-level DaemonSet.
- **[HPP] HostPathProvisioner deployment (HostPathProvisioners)**: creates virtual machines that use local node storage (not used in this repo).
As you can see, most of the CRDs are not listed there; you will find them in the new dynamic plugin navigation bar on the left of the web console.
A VM object specifies a template to create a running instance of the VM inside your cluster. The running instance of a VM is a virtual machine instance (VMI), and it is executed and managed by a container located inside a pod. If a VMI is deleted, another instance is generated based on the VM object configuration.
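For orientation, a minimal VirtualMachine object looks roughly like the sketch below; the name and container disk image are illustrative, and the templates in `vms/` are more complete:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-example
spec:
  running: true                 # the operator creates and keeps a VMI for this VM
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:        # ephemeral disk pulled from a container image
            image: quay.io/containerdisks/fedora:latest
```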
The default templates are provided by Red Hat. These templates include settings to create generic systems with networking, users, and storage preconfigured. Create the Virtual Machine:
```bash
oc process -f vms/01-vm-fedora.yaml | oc apply -f -
```
The easiest way to SSH into the VMs is using the KubeVirt command-line interface (`virtctl`). You can install it by downloading the binary from the OCP cluster or by following the official documentation.
Now you can SSH into the VM using the following command:

```bash
virtctl -n ocp-virt-pgd ssh fedora@fedora-01
```
You can also access a service of the VM locally by forwarding its port to your machine:

```bash
oc port-forward $VIRT_LAUNCHER_POD $LOCAL_PORT:$REMOTE_PORT -n $VM_PROJECT
```
Finally, you can perform extra configuration to automatically add your SSH public key to the VM on startup. Check the documentation for more information. Use the following command to create the secret that holds the authorized key:

```bash
oc create secret generic user-pub-key --from-file=key1=$HOME/.ssh/id_rsa.pub -n ocp-virt-pgd
```
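The secret is then referenced from the VM spec. A minimal sketch of the stanza, assuming the key is propagated to the `fedora` user via the QEMU guest agent (the repo template may use a different propagation method):

```yaml
spec:
  template:
    spec:
      accessCredentials:
        - sshPublicKey:
            source:
              secret:
                secretName: user-pub-key   # the secret created above
            propagationMethod:
              qemuGuestAgent:
                users:
                  - fedora                 # key is injected for this user
```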
You can connect a VM to three different types of networks:
- **Default pod network**: to use the default pod network, the network interface must use the masquerade binding method. A masquerade binding uses NAT to allow other pods in the cluster to communicate with the VMI.
- **Multus**: connect a VM to multiple interfaces and external networks with the Multus Container Network Interface (CNI) plug-in. To connect to an external network, you must create a `linux-bridge` network attachment definition that exposes the layer-2 device to a specific namespace (see the interfaces sketch below).
- **Single Root I/O Virtualization (SR-IOV)**: connect to a virtual function network for high performance.
When the VMI is provisioned, the `virt-launcher` pod routes IPv4 traffic to the Dynamic Host Configuration Protocol (DHCP) address of the VMI. This routing also makes it possible to connect to a VMI with a port-forwarding connection.
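Inside the VM spec, the pod network and an additional Multus network are wired up with matching `interfaces` and `networks` entries. A sketch (the Multus `networkName` is illustrative and must match a NetworkAttachmentDefinition, covered later):

```yaml
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}        # pod network via NAT
            - name: external
              bridge: {}            # layer-2 bridge to the Multus network
      networks:
        - name: default
          pod: {}
        - name: external
          multus:
            networkName: nad-fedora-external   # illustrative NAD name
```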
Now you have access to the pod network. Do you also want to add a second network to the VM? Great! You will have to use Multus, the NMState Operator, and other great projects, so keep reading!
The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster’s nodes with NMState.
Red Hat OpenShift Virtualization uses the Kubernetes NMState Operator to report on and configure node networking in a declarative way. The Kubernetes NMstate Operator provides the components for declarative node networking in a Red Hat OpenShift cluster.
You can install it by applying the following file:
```bash
# If you don't use Argo CD, you need to comment out the nmstate object first
oc apply -k openshift/nmstate
```
After that, it is useful mainly for three things:

- Check the network configuration for each node using the Node Network State (NNS):

  ```bash
  # Check all the network configurations
  oc get nns

  # Get the network configuration of an OCP node
  oc get nns $NODE_NAME -o yaml
  ```

- Apply new configuration to nodes based on a selector using the Node Network Configuration Policy (NNCP), as shown in the sketch after this list:

  ```bash
  oc apply -f openshift/ocp-virt-network/nncp-br1-policy.yaml
  ```

  You can see the configuration policies with the following command:

  ```bash
  oc get nodenetworkconfigurationpolicy.nmstate.io
  ```

- Finally, after the policy has completed successfully, you will see a report in a new object, the Node Network Configuration Enactment (NNCE):

  ```bash
  oc get NodeNetworkConfigurationEnactment
  ```

  If something is misconfigured, you can see the error message with the following command:

  ```bash
  oc get nnce $NODE_NAME -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}'
  ```
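A sketch of what such a policy contains; the bridge port (`eth1` here) is illustrative and must be the name of the node's secondary interface, while the node selector matches the `usage: virtualization` label set in the MachineSet:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-policy
spec:
  nodeSelector:
    usage: virtualization        # only the bare metal nodes
  desiredState:
    interfaces:
      - name: br1
        type: linux-bridge
        state: up
        ipv4:
          enabled: false
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eth1         # secondary NIC attached in the first section
```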
> ℹ️ In order to apply this configuration only to bare metal nodes, we label the nodes with `usage: virtualization` in the MachineSet that we created in the first section. For more information, see this KCS article.
> ℹ️ If you need more information about this topic, check the official documentation for the NMState Operator.
If you want to compare the configuration before and after setting the Node Network Configuration Policy, you can compare the files that contain the following outputs:
- `docs/examples/metal-node-nns-out-v01.yaml`: before setting the configuration, there is no bridge `br1`.
- `docs/examples/metal-node-nns-out-v02.yaml`: after setting the configuration, there is a bridge named `br1`.
The Multus CNI plug-in acts as a wrapper by calling other CNI plug-ins for advanced networking functionalities, such as attaching multiple network interfaces to pods in an OpenShift cluster.
How to configure it? Use the Network Attachment Definition, which is a namespaced object that exposes existing layer-2 network devices, such as bridges and switches, to VMs and pods.
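Such a Network Attachment Definition looks roughly like this sketch, pointing at the `br1` bridge created earlier by the NNCP (the object name and the empty IPAM section are illustrative):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: nad-fedora-external
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "nad-fedora-external",
      "type": "cnv-bridge",
      "bridge": "br1",
      "ipam": {}
    }
```

In this repo, the definition is created from a template: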
```bash
oc process -f openshift/ocp-virt-network/nad-fedora-external.yaml | oc apply -f -
```
Many applications require configuration using some combination of configuration files, command-line arguments, and environment variables. Both `ConfigMaps` and `Secrets` are used to provide configuration settings and credentials to pods.
The following template shows how to create a Secret and a ConfigMap and mount them as files inside the VM:

```bash
oc process -f vms/03-vm-fedora.yaml -p VM_NAME=fedora-03 -p IP_ADDRESS="192.168.51.152/24" | oc apply -f -
```
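Inside the VM spec, the Secret and ConfigMap are attached as extra disks that the guest can mount. A sketch of the relevant fragment (volume and object names are illustrative):

```yaml
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - name: app-config
              disk:
                bus: virtio
              serial: CONFIG        # helps identify the device inside the guest
            - name: app-secret
              disk:
                bus: virtio
              serial: SECRET
      volumes:
        - name: app-config
          configMap:
            name: fedora-03-configmap
        - name: app-secret
          secret:
            secretName: fedora-03-secret
```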
OpenShift Virtualization provides several mechanisms to manage VM disks. It introduces new resource types to facilitate the process of creating the PVC with optimal parameters for VM disks and copying the disk image into the resulting PV:
- **StorageProfile**: for each storage class, a StorageProfile resource gives default values optimized for VM disks. As a developer, when you use a storage profile to prepare a VM disk, the only parameter that you must provide is the disk size.
- **DataVolume**: a DataVolume resource describes a VM disk. It groups the PVC definition and the details of the disk image to inject into the PV.

DataVolume resources have two parts:

- The storage profile specification, which provides the details of the PVC to create. You only need to specify the disk size.
- The source image details, which provide the disk image to inject into the PV.
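Putting it together, a DataVolume that imports a cloud image via URL looks roughly like this (the URL and size are placeholders; the storage class, volume mode, and access mode come from the StorageProfile):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-root
spec:
  storage:
    resources:
      requests:
        storage: 30Gi            # the only value you must provide
  source:
    http:
      url: https://example.com/images/fedora.qcow2   # placeholder image URL
```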
The disk type inside the VM depends on the interface that you select when you attach the data volume:
- `scsi` interface: standard SCSI device. Linux systems name it with the `/dev/sdX` format.
- `virtio` interface: optimal performance. Linux systems name it with the `/dev/vdX` format. Some operating systems do not provide that driver by default.
> ℹ️ When you hot-plug a disk to a running VM, `scsi` is the only available interface.
The source section of a DataVolume resource provides the details of the disk image to inject into the persistent volume (PV). The supported source types are:

- Blank (creates a PVC).
- Import via URL (creates a PVC).
- Use an existing PVC.
- Clone an existing PVC (creates a PVC).
- Import via registry (creates a PVC).
- Container (ephemeral).
Adding an extra block disk is as simple as creating a `DataVolume` with `.spec.source.blank: {}` and attaching it to the VM. In the template, I also add the commands to generate the filesystem in the cloud-init configuration for the sake of simplicity:
```bash
# Create a VM and its blank disk at the same time
oc process -f vms/04-vm-fedora.yaml -p VM_NAME=fedora-04 | oc apply -f -
```
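A sketch of the relevant pieces of such a template, assuming the blank disk shows up as `/dev/vdb` inside the guest (device, volume, and object names are illustrative):

```yaml
spec:
  dataVolumeTemplates:
    - metadata:
        name: fedora-04-data
      spec:
        storage:
          resources:
            requests:
              storage: 10Gi
        source:
          blank: {}                    # empty disk, no image injected
  template:
    spec:
      domain:
        devices:
          disks:
            - name: datadisk
              disk:
                bus: virtio            # appears as /dev/vdX in the guest
      volumes:
        - name: datadisk
          dataVolume:
            name: fedora-04-data
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              fs_setup:
                - device: /dev/vdb
                  filesystem: ext4
              mounts:
                - [/dev/vdb, /data]
```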
> Currently, this chapter does not work as expected because virtiofs is an experimental feature in KubeVirt and, without it, you cannot mount file systems.
The only StorageClasses available by default on OCP on AWS are gp2 and gp3, which are backed by AWS Elastic Block Store (EBS). EBS does not allow us to create RWX file systems. Therefore, we have to add the AWS EFS CSI Driver Operator to access AWS Elastic File System (EFS), or use ODF (OpenShift Data Foundation).
Access this documentation to learn about an automated process to configure AWS EFS in an OCP cluster deployed on AWS:
>> Click Here <<
Access this documentation to learn about an automated process to configure OpenShift Data Foundation on OCP on AWS:
>> Click Here <<
> ℹ️ Before creating the VM objects, make sure that you have updated the HyperConverged object to enable the `ExperimentalVirtiofsSupport` feature gate. Check the documentation on how to mount file systems on VMs and the required feature gate. Also check the documentation on how to enable feature gates using annotations on the HyperConverged object. You can see an example of how to do it in the HyperConverged object of this repo.
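A sketch of how the feature gate can be enabled through an annotation on the HyperConverged object (this is the jsonpatch mechanism mentioned in the HyperConverged documentation; double-check it against the linked docs for your version):

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
  annotations:
    kubevirt.kubevirt.io/jsonpatch: |-
      [
        {
          "op": "add",
          "path": "/spec/configuration/developerConfiguration/featureGates/-",
          "value": "ExperimentalVirtiofsSupport"
        }
      ]
spec: {}
```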
Before creating the VM, you need to grant the privileged SCC to the `kubevirt-controller` service account. If you do not, you will face errors when trying to create the `virt-launcher` pods. Grant it with the following command:

```bash
oc adm policy add-scc-to-user privileged system:serviceaccount:openshift-cnv:kubevirt-controller
```
The following commands allow you to create two VMs using the new Storage Class:
```bash
oc process -f vms/05-vm-shared-disk.yaml | oc apply -f -
oc process -f vms/05-vm-shared-disk.yaml -p STORAGE_CLASS_NAME=ocs-storagecluster-cephfs | oc apply -f -
oc process -f vms/05-vm-fedora.yaml -p VM_NAME=fedora-05-a | oc apply -f -
oc process -f vms/05-vm-fedora.yaml -p VM_NAME=fedora-05-b | oc apply -f -
```
To quickly deploy a container with tools to check connectivity, I normally use the UBI version of the Red Hat Enterprise Linux Support Tools image, which can be found in the Red Hat Container Catalog.
You can deploy this container using the following script:
```bash
oc process -f docs/ocp-tools/01-toolbox.yaml -p POD_PROJECT=ocp-virt-pgd | oc apply -f -
```
In some cases, networking configuration can be tricky. That's why, in this document, I compare several VM configuration combinations and their actual configuration in the machine.
>> Click Here <<