diff --git a/docs/advanced/addons/rancher-vcluster.md b/docs/advanced/addons/rancher-vcluster.md index 0b889df3dd7..75bcbf03559 100644 --- a/docs/advanced/addons/rancher-vcluster.md +++ b/docs/advanced/addons/rancher-vcluster.md @@ -35,7 +35,7 @@ After installing the addon, you need to configure it from the Harvester UI as fo 1. Select **Advanced** > **Addons**. 1. Find the `rancher-vcluster` addon and select **⋮** > **Edit Config**. -![](/img/v1.2/rancher-vcluster/VclusterConfig.png) + ![](/img/v1.2/rancher-vcluster/VclusterConfig.png) 1. In the **Hostname** field, enter a valid DNS record pointing to the Harvester VIP. This is essential as the vcluster ingress is synced to the parent Harvester cluster. A valid hostname is used to filter ingress traffic to the vcluster workload. 1. In the **Bootstrap Password** field, enter the bootstrap password for the new Rancher deployed on the vcluster. diff --git a/docs/advanced/settings.md b/docs/advanced/settings.md index 107e366b64b..4d5688b2b79 100644 --- a/docs/advanced/settings.md +++ b/docs/advanced/settings.md @@ -1,5 +1,4 @@ --- -id: index sidebar_position: 1 sidebar_label: Settings title: "Settings" diff --git a/docs/advanced/storagenetwork.md b/docs/advanced/storagenetwork.md index d231c4f2288..502011ea1b4 100644 --- a/docs/advanced/storagenetwork.md +++ b/docs/advanced/storagenetwork.md @@ -48,17 +48,17 @@ kubectl apply -f https://raw.githubusercontent.com/harvester/harvester/v1.1.0/de ## Configuration Example - VLAN ID - - Please check with your network switch setting, and provide a dedicated VLAN ID for Storage Network. + - Please check with your network switch setting, and provide a dedicated VLAN ID for Storage Network. - Well-configured Cluster Network and VLAN Config - - Please refer Networking page for more details and configure `Cluster Network` and `VLAN Config` but not `Networks`. + - Please refer Networking page for more details and configure `Cluster Network` and `VLAN Config` but not `Networks`. - IP range for Storage Network - - IP range should not conflict or overlap with Kubernetes cluster networks(`10.42.0.0/16`, `10.43.0.0/16`, `10.52.0.0/16` and `10.53.0.0/16` are reserved). - - IP range should be in IPv4 CIDR format and Longhorn pods use Storage Network as follows: - - `instance-manger-e` and `instance-manager-r` pods: These require 2 IPs per node. During an upgrade, two versions of these pods will exist (old and new), and the old version will be deleted once the upgrade is successful. - - `backing-image-ds` pods: These are employed to process on-the-fly uploads and downloads of backing image data sources. These pods will be removed once the image uploads or downloads are completed. - - `backing-image-manager` pods: 1 IP per disk, similar to the instance manager pods. Two versions of these will coexist during an upgrade, and the old ones will be removed after the upgrade is completed. - - The required number of IPs is calculated using a simple formula: `Required Number of IPs = Number of Nodes * 4 + Number of Disks * 2 + Number of Images to Download/Upload` - - For example, if your cluster has five nodes, each node has two disks, and ten images will be uploaded simultaneously, the IP range should be greater than or equal to `/26` (`5 * 4 + 5 * 2 * 2 + 10 = 50`). + - IP range should not conflict or overlap with Kubernetes cluster networks(`10.42.0.0/16`, `10.43.0.0/16`, `10.52.0.0/16` and `10.53.0.0/16` are reserved). 
+  - IP range should be in IPv4 CIDR format and Longhorn pods use Storage Network as follows:
+    - `instance-manager-e` and `instance-manager-r` pods: These require 2 IPs per node. During an upgrade, two versions of these pods will exist (old and new), and the old version will be deleted once the upgrade is successful.
+    - `backing-image-ds` pods: These are employed to process on-the-fly uploads and downloads of backing image data sources. These pods will be removed once the image uploads or downloads are completed.
+    - `backing-image-manager` pods: 1 IP per disk, similar to the instance manager pods. Two versions of these will coexist during an upgrade, and the old ones will be removed after the upgrade is completed.
+    - The required number of IPs is calculated using a simple formula: `Required Number of IPs = Number of Nodes * 4 + Number of Disks * 2 + Number of Images to Download/Upload`
+    - For example, if your cluster has five nodes, each node has two disks, and ten images will be uploaded simultaneously, the IP range should be greater than or equal to `/26` (`5 * 4 + 5 * 2 * 2 + 10 = 50`).
 
 We will take the following configuration as an example to explain the details of the Storage Network
 
diff --git a/docs/airgap.md b/docs/airgap.md
index 358d0f42d4e..a6071a47c7e 100644
--- a/docs/airgap.md
+++ b/docs/airgap.md
@@ -1,5 +1,4 @@
 ---
-id: airgap
 sidebar_position: 3
 sidebar_label: Air Gapped Environment
 title: "Air Gapped Environment"
diff --git a/docs/authentication.md b/docs/authentication.md
index b3ce0a54e12..87e9ddd1415 100644
--- a/docs/authentication.md
+++ b/docs/authentication.md
@@ -1,5 +1,4 @@
 ---
-id: authentication
 sidebar_position: 5
 sidebar_label: Authentication
 title: "Authentication"
diff --git a/docs/faq.md b/docs/faq.md
index 14dd72267b6..9a20d1a6463 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -1,5 +1,4 @@
 ---
-id: faq
 sidebar_position: 17
 sidebar_label: FAQ
 title: "FAQ"
diff --git a/docs/host/_category_.json b/docs/host/_category_.json
index b561b199771..7ff08c814ab 100644
--- a/docs/host/_category_.json
+++ b/docs/host/_category_.json
@@ -5,6 +5,6 @@
   "collapsed": false,
   "link": {
     "type": "doc",
-    "id": "host-management"
+    "id": "host"
   }
 }
\ No newline at end of file
diff --git a/docs/host/host.md b/docs/host/host.md
index 33a5c7a1826..1130f468226 100644
--- a/docs/host/host.md
+++ b/docs/host/host.md
@@ -1,5 +1,4 @@
 ---
-id: host-management
 sidebar_position: 1
 sidebar_label: Host Management
 title: "Host Management"
diff --git a/docs/index.md b/docs/index.md
index d8792bc6aad..767d6847ccb 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,5 +1,4 @@
 ---
-id: overview
 sidebar_position: 1
 sidebar_label: Harvester Overview
 slug: /
@@ -21,7 +20,7 @@ The Harvester architecture consists of cutting-edge open-source technologies:
 - **Built on top of Kubernetes.** [Kubernetes](https://kubernetes.io/) has become the predominant infrastructure language across all form factors, and Harvester is an HCI solution with Kubernetes under the hood.
 - **Virtualization management with Kubevirt.** [Kubevirt](https://kubevirt.io/) provides virtualization management using KVM on top of Kubernetes.
 - **Storage management with Longhorn.** [Longhorn](https://longhorn.io/) provides distributed block storage and tiering.
-- **Observability with Grafana and Prometheus.** [Granfana](https://grafana.com/) and [Prometheus](https://prometheus.io/) provide robust monitoring and logging.
+- **Observability with Grafana and Prometheus.** [Grafana](https://grafana.com/) and [Prometheus](https://prometheus.io/) provide robust monitoring and logging. ![](/img/v1.2/architecture.svg) diff --git a/docs/install/iso-install.md b/docs/install/iso-install.md index 6236c5a5224..8048b76fc06 100644 --- a/docs/install/iso-install.md +++ b/docs/install/iso-install.md @@ -1,5 +1,4 @@ --- -id: index sidebar_position: 2 sidebar_label: ISO Installation title: "ISO Installation" diff --git a/docs/install/requirements.md b/docs/install/requirements.md index 06c108efbd2..6bd9d97800f 100644 --- a/docs/install/requirements.md +++ b/docs/install/requirements.md @@ -1,5 +1,4 @@ --- -id: requirements sidebar_position: 1 sidebar_label: Hardware and Network Requirements title: "Hardware and Network Requirements" diff --git a/docs/logging/harvester-logging.md b/docs/logging/harvester-logging.md index dc85a72aad5..1a90f47550b 100644 --- a/docs/logging/harvester-logging.md +++ b/docs/logging/harvester-logging.md @@ -1,5 +1,4 @@ --- -id: harvester-logging sidebar_position: 1 sidebar_label: Logging title: "Logging" diff --git a/docs/monitoring/harvester-monitoring.md b/docs/monitoring/harvester-monitoring.md index c66d2663d37..562f1f7520d 100644 --- a/docs/monitoring/harvester-monitoring.md +++ b/docs/monitoring/harvester-monitoring.md @@ -1,5 +1,4 @@ --- -id: harvester-monitoring sidebar_position: 1 sidebar_label: Monitoring title: "Monitoring" diff --git a/docs/networking/clusternetwork.md b/docs/networking/clusternetwork.md index 4d4ca8ff288..1d8b5e4bd6c 100644 --- a/docs/networking/clusternetwork.md +++ b/docs/networking/clusternetwork.md @@ -1,5 +1,4 @@ --- -id: index sidebar_position: 1 sidebar_label: Cluster Network title: "Cluster Network" diff --git a/docs/networking/loadbalancer.md b/docs/networking/loadbalancer.md index 5f6e55a25f5..8bfb9135d02 100644 --- a/docs/networking/loadbalancer.md +++ b/docs/networking/loadbalancer.md @@ -39,7 +39,7 @@ Harvester VM load balancer doesn't support Windows VMs because the guest agent i To create a new Harvester VM load balancer: 1. Go to the **Networks > Load Balancer** page and select **Create**. 1. Select the **Namespace** and specify the **Name**. -1. Go to the **Basic** tab to choose the IPAM mode, which can be **DHCP** or **IP Pool**. If you select **IP Pool**, prepare an IP pool first, specify the IP pool name, or choose **auto**. If you choose **auto**, the LB automatically selects an IP pool according to [the IP pool selection policy](/networking/ippool.md/#selection-policy). +1. Go to the **Basic** tab to choose the IPAM mode, which can be **DHCP** or **IP Pool**. If you select **IP Pool**, prepare an IP pool first, specify the IP pool name, or choose **auto**. If you choose **auto**, the LB automatically selects an IP pool according to [the IP pool selection policy](./ippool.md#selection-policy). ![](/img/v1.2/networking/create-lb-01.png) 1. Go to the **Listeners** tab to add listeners. You must specify the **Port**, **Protocol**, and **Backend Port** for each listener. ![](/img/v1.2/networking/create-lb-02.png) @@ -66,4 +66,4 @@ In conjunction with Harvester Cloud Provider, the Harvester load balancer provid ![](/img/v1.2/networking/guest-kubernetes-cluster-lb.png) When you create, update, or delete an LB service on a guest cluster with Harvester Cloud Provider, the Harvester Cloud Provider will create a Harvester LB automatically. -For more details, refer to [Harvester Cloud Provider](/rancher/cloud-provider.md). 
+For more details, refer to [Harvester Cloud Provider](../rancher/cloud-provider.md). diff --git a/docs/rancher/cloud-provider.md b/docs/rancher/cloud-provider.md index 25a1f4640aa..64c0ef3e978 100644 --- a/docs/rancher/cloud-provider.md +++ b/docs/rancher/cloud-provider.md @@ -44,7 +44,7 @@ For a detailed support matrix, please refer to the **Harvester CCM & CSI Driver ### Deploying to the RKE1 Cluster with Harvester Node Driver When spinning up an RKE cluster using the Harvester node driver, you can perform two steps to deploy the `Harvester` cloud provider: -1. Select `Harvester(Out-of-tree)` option. +1. Select `Harvester (Out-of-tree)` option. ![](/img/v1.2/rancher/rke-cloud-provider.png) diff --git a/docs/rancher/csi-driver.md b/docs/rancher/csi-driver.md index 55aac28baf8..ca51242a9d4 100644 --- a/docs/rancher/csi-driver.md +++ b/docs/rancher/csi-driver.md @@ -34,7 +34,7 @@ Currently, the Harvester CSI driver only supports single-node read-write(RWO) vo ### Deploying with Harvester RKE1 node driver -- Select the `Harvester(Out-of-tree)` option. +- Select the `Harvester (Out-of-tree)` option. ![](/img/v1.2/rancher/rke-cloud-provider.png) diff --git a/docs/rancher/rancher-integration.md b/docs/rancher/rancher-integration.md index 04263b57b1a..9ec5957efef 100644 --- a/docs/rancher/rancher-integration.md +++ b/docs/rancher/rancher-integration.md @@ -1,5 +1,4 @@ --- -id: index sidebar_position: 1 sidebar_label: Rancher Integration title: "Rancher Integration" diff --git a/docs/terraform/terraform-provider.md b/docs/terraform/terraform-provider.md index 150507267db..9412b8544a2 100644 --- a/docs/terraform/terraform-provider.md +++ b/docs/terraform/terraform-provider.md @@ -1,5 +1,4 @@ --- -id: terraform-provider sidebar_position: 1 sidebar_label: Harvester Terraform Provider title: "Harvester Terraform Provider" diff --git a/docs/troubleshooting/installation.md b/docs/troubleshooting/installation.md index 90a5f1ae154..e52a3af94d7 100644 --- a/docs/troubleshooting/installation.md +++ b/docs/troubleshooting/installation.md @@ -1,5 +1,4 @@ --- -id: index sidebar_position: 1 sidebar_label: Installation title: "Installation" diff --git a/docs/upgrade/automatic.md b/docs/upgrade/automatic.md index f725f98a564..388f6e4dcc2 100644 --- a/docs/upgrade/automatic.md +++ b/docs/upgrade/automatic.md @@ -1,5 +1,4 @@ --- -id: index sidebar_position: 1 sidebar_label: Upgrading Harvester title: "Upgrading Harvester" diff --git a/docs/upload-image.md b/docs/upload-image.md index 9e6dbab469b..c477b01b342 100644 --- a/docs/upload-image.md +++ b/docs/upload-image.md @@ -1,5 +1,4 @@ --- -id: upload-image sidebar_position: 6 sidebar_label: Upload Images title: "Upload Images" diff --git a/docs/vm/create-vm.md b/docs/vm/create-vm.md index 0b29c2b0fef..ffe11aa8ef7 100644 --- a/docs/vm/create-vm.md +++ b/docs/vm/create-vm.md @@ -1,5 +1,4 @@ --- -id: index sidebar_position: 1 sidebar_label: Create a Virtual Machine title: "Create a Virtual Machine" diff --git a/docs/vm/hotplug-volume.md b/docs/vm/hotplug-volume.md index 427ddff7fd1..b8c15ec3fcf 100644 --- a/docs/vm/hotplug-volume.md +++ b/docs/vm/hotplug-volume.md @@ -28,9 +28,9 @@ The following steps assume that you have a running VM and a ready volume: 1. Go to the **Virtual Machines** page. 1. Find the VM that you want to add a volume to and select **⋮ > Add Volume**. - ![Add Volume Button](/img/v1.2/vm/add-volume-button.png) + ![Add Volume Button](/img/v1.2/vm/add-volume-button.png) 1. Enter the **Name** and select the **Volume**. 1. Click **Apply**. 
- ![Add Volume Panel](/img/v1.2/vm/add-volume-panel.png) + ![Add Volume Panel](/img/v1.2/vm/add-volume-panel.png) diff --git a/docs/vm/resource-overcommit.md b/docs/vm/resource-overcommit.md index 906706222a5..5fe2c652b47 100644 --- a/docs/vm/resource-overcommit.md +++ b/docs/vm/resource-overcommit.md @@ -34,12 +34,12 @@ Users can modify the global `overcommit-config` by following the steps below, an 1. Go to the **Advanced > Settings** page. - ![overcommit page](/img/v1.2/vm/overcommit-page.png) + ![overcommit page](/img/v1.2/vm/overcommit-page.png) 1. Find the `overcommit-config` setting. 1. Configure the desired CPU, Memory, and Storage ratio. - ![overcommit panel](/img/v1.2/vm/overcommit-panel.png) + ![overcommit panel](/img/v1.2/vm/overcommit-panel.png) ## Configure overcommit for a single virtual machine diff --git a/docs/volume/create-volume.md b/docs/volume/create-volume.md index 8c9945a9d7a..ada0a3c5ed0 100644 --- a/docs/volume/create-volume.md +++ b/docs/volume/create-volume.md @@ -1,5 +1,4 @@ --- -id: index sidebar_position: 1 sidebar_label: Create a Volume title: "Create a Volume" diff --git a/docs/volume/volume-snapshots.md b/docs/volume/volume-snapshots.md index 18bfd266b29..4115735eb9e 100644 --- a/docs/volume/volume-snapshots.md +++ b/docs/volume/volume-snapshots.md @@ -21,11 +21,11 @@ You can create a volume snapshot from an existing volume by following these step 1. Choose the volume that you want to take a snapshot of and select **⋮ > Take Snapshot**. - ![create-volume-snapshot-1](/img/v1.2/volume/create-volume-snapshot-1.png) + ![create-volume-snapshot-1](/img/v1.2/volume/create-volume-snapshot-1.png) 1. Enter a **Name** for the snapshot. - ![create-volume-snapshot-2](/img/v1.2/volume/create-volume-snapshot-2.png) + ![create-volume-snapshot-2](/img/v1.2/volume/create-volume-snapshot-2.png) 1. Select **Create** to finish creating a new volume snapshot. @@ -45,16 +45,16 @@ You can restore a new volume from an existing volume snapshot by following these 1. Select **⋮ > Restore**. - ![restore-volume-snapshot-1](/img/v1.2/volume/restore-volume-snapshot-1.png) + ![restore-volume-snapshot-1](/img/v1.2/volume/restore-volume-snapshot-1.png) - ![restore-volume-snapshot-2](/img/v1.2/volume/restore-volume-snapshot-2.png) + ![restore-volume-snapshot-2](/img/v1.2/volume/restore-volume-snapshot-2.png) 1. Specify the **Name** of the new volume. - ![restore-volume-snapshot-3](/img/v1.2/volume/restore-volume-snapshot-3.png) + ![restore-volume-snapshot-3](/img/v1.2/volume/restore-volume-snapshot-3.png) 1. If the source volume is not an image volume, you can select a different **StorageClass**. You can not change the **StorageClass** if the source volume is an image volume. - ![restore-volume-snapshot-4](/img/v1.2/volume/restore-volume-snapshot-4.png) + ![restore-volume-snapshot-4](/img/v1.2/volume/restore-volume-snapshot-4.png) 1. Select **Create** to finish restoring a new volume. 
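+
+If you prefer to work with `kubectl`, the same restore can also be expressed declaratively by creating a PVC that references the snapshot as its data source. The sketch below uses the standard Kubernetes CSI restore mechanism rather than a Harvester-specific API; `restored-volume` and `my-snapshot` are placeholder names, and `harvester-longhorn` is assumed to be your cluster's default StorageClass:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: restored-volume        # placeholder: name of the new volume
+  namespace: default
+spec:
+  storageClassName: harvester-longhorn   # assumption: the default Harvester StorageClass
+  dataSource:
+    name: my-snapshot                    # placeholder: the VolumeSnapshot to restore from
+    kind: VolumeSnapshot
+    apiGroup: snapshot.storage.k8s.io
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi                      # must be at least the size of the source volume
+```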
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current.json b/i18n/zh/docusaurus-plugin-content-docs/current.json index f979271c23e..980345c689a 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current.json +++ b/i18n/zh/docusaurus-plugin-content-docs/current.json @@ -498,5 +498,9 @@ "sidebar.api.doc.List Virtual Machines For All Namespaces": { "message": "List Virtual Machines For All Namespaces", "description": "The label for the doc item List Virtual Machines For All Namespaces in sidebar api, linking to the doc api/list-virtual-machine-for-all-namespaces" + }, + "sidebar.docs.category.Available Addons": { + "message": "可用插件", + "description": "The label for category Available Addons in sidebar docs" } } diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons.md index f57a0e43594..a09ff64fdff 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons.md @@ -1,28 +1,48 @@ --- -sidebar_position: 4 +sidebar_position: 5 sidebar_label: 插件 title: "插件" --- -_从 v1.1.0 起可用_ - -从 v1.1.0 开始,Harvester 将使用插件(Addon)来提供可选功能。 +Harvester 将使用插件(Addon)来提供可选功能。 这样,我们能够确保 Harvester 占用较少的空间,同时用户能够根据他们的实际用例或要求启用/禁用功能。 不同的底层插件支持不同程度的定制。 -v1.1.0 目前附带了两个插件: -* [pcidevices-controller](./pcidevices.md) -* [vm-import-controller](./vmimport.md) - -![](/img/v1.2/addons/DefaultAddons.png) - -_从 v1.2.0 起可用_ - -v1.2.0 附带了另外两个插件: +_从 v1.1.0 起可用_ +Harvester v1.2.0 附带了五个插件: +* [pcidevices-controller](./addons/pcidevices.md) +* [vm-import-controller](./addons/vmimport.md) * [rancher-monitoring](../monitoring/harvester-monitoring.md) * [rancher-logging](../logging/harvester-logging.md) +* [harvester-seeder](./addons/seeder.md) ![](/img/v1.2/addons/AddonsV120.png) + +:::note + +**harvester-seeder** 作为 Harvester v1.2.0 中的实验性功能发布,并在 **Name** 中添加了一个 **Experimental** 标签。 + +::: + +你可以通过选择插件并从 **Basic** 选项卡中选择 **⋮** > **Enable** 来启用**已禁用**的插件。 + +![](/img/v1.2/addons/enable-rancher-logging-addon.png) + +成功启用插件后,**State** 将变为 **DeploySuccessful**。 + +![](/img/v1.2/addons/deploy-successful-addon.png) + +你可以通过选择插件并从 **Basic** 选项卡中选择 **⋮** > **Disable** 来禁用**已启用**的插件。 + +![](/img/v1.2/addons/disable-rancher-monitoring-addon.png) + +当插件成功禁用后,**State** 将变为 **Disabled**。 + +:::note + +禁用插件后,配置数据将被存储,以便在再次启用插件时重复使用。 + +::: \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/_category_.json b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/_category_.json new file mode 100644 index 00000000000..741c66169b8 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/_category_.json @@ -0,0 +1,6 @@ +{ + "position": 6, + "label": "可用插件", + "collapsible": true, + "collapsed": true +} \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/pcidevices.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/pcidevices.md similarity index 62% rename from i18n/zh/docusaurus-plugin-content-docs/current/advanced/pcidevices.md rename to i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/pcidevices.md index a90d9af4dd0..03785b2eae4 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/pcidevices.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/pcidevices.md @@ -1,7 +1,7 @@ --- -sidebar_position: 6 +sidebar_position: 2 sidebar_label: PCI 设备 -title: "PCI 设备(实验功能)" +title: "PCI 设备" 
 ---
 
 _从 v1.1.0 起可用_
 
@@ -15,6 +15,8 @@ _从 v1.1.0 起可用_
 
 ![](/img/v1.2/vm-import-controller/EnableAddon.png)
 
+成功部署 `pcidevices-controller` 插件后，可能需要几分钟时间进行扫描并使 PCIDevice CRD 变得可用。
+![](/img/v1.2/pcidevices/PcideviceEnabled.png)
 ## 在 PCI 设备上启用直通
 
 1. 前往 `Advanced > PCI Devices` 页面：
@@ -57,3 +59,32 @@ _从 v1.1.0 起可用_
 ## 在 VM 内为 PCI 设备安装驱动程序
 
 这里涉及的操作与在主机中安装驱动程序一样。PCI 透传功能将主机设备绑定到 `vfio-pci` 驱动程序，让 VM 能够使用自己的驱动程序。你可以查看安装在 VM 中的 NVIDIA 驱动程序的[屏幕截图](https://tobilehman.com/posts/suse-harvester-pci/#toc)，其中包括证明设备驱动程序可以正常工作的 CUDA 示例。
+
+## SRIOV 网络设备
+_从 v1.2.0 起可用_
+
+![](/img/v1.2/pcidevices/SriovNetworkDevicesLink.png)
+
+`pcidevices-controller` 插件现在可以扫描底层主机上的网络接口并检查它们是否支持 SRIOV Virtual Function (VF)。如果找到有效的设备，`pcidevices-controller` 将生成一个新的 `SRIOVNetworkDevice` 对象。
+
+![](/img/v1.2/pcidevices/SriovNetworkDevicesList.png)
+
+要在 SriovNetworkDevice 上创建 VF，你可以单击 **⋮ > Enable**，然后定义 **Number of Virtual Functions**。
+![](/img/v1.2/pcidevices/SriovNetworkDeviceEnable.png)
+
+![](/img/v1.2/pcidevices/SriovNetworkVFDefinition.png)
+
+`pcidevices-controller` 将定义网络接口上的 VF，并为新创建的 VF 报告新的 PCI 设备状态。
+
+![](/img/v1.2/pcidevices/SriovNetworkDevicesVFStatus.png)
+
+下次重新扫描时，`pcidevices-controller` 将为 VF 创建 PCIDevices。这可能需要 1 分钟的时间。
+
+你现在可以导航到 **PCI Devices** 页面来查看新设备。
+
+我们还引入了一个新的过滤器来帮助你通过底层网络接口来过滤 PCI 设备。
+
+![](/img/v1.2/pcidevices/SriovNetworkDevicesFilter.png)
+
+新创建的 PCI 设备可以像其他 PCI 设备一样直通到虚拟机。
+![](/img/v1.2/pcidevices/SriovNetworkDevicesFilterResult.png)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/rancher-vcluster.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/rancher-vcluster.md
new file mode 100644
index 00000000000..db283613792
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/rancher-vcluster.md
@@ -0,0 +1,55 @@
+---
+sidebar_position: 5
+sidebar_label: Rancher Manager
+title: "Rancher Manager（实验性）"
+---
+
+_从 v1.2.0 起可用_
+
+`rancher-vcluster` 插件用于将 Rancher Manager 作为底层 Harvester 集群上的工作负载运行，该功能是使用 [vcluster](https://www.vcluster.com/) 实现的。
+
+![](/img/v1.2/vm-import-controller/EnableAddon.png)
+
+该插件在 `rancher-vcluster` 命名空间中运行嵌套的 K3s 集群，并将 Rancher 部署到该集群。
+
+在安装过程中，Rancher 的 ingress 会同步到 Harvester 集群，从而允许最终用户访问 Rancher。
+
+## 安装 rancher-vcluster
+
+Harvester 没有附带 `rancher-vcluster` 插件，但你可以在 [experimental-addons 仓库](https://github.com/harvester/experimental-addons)中找到该插件。
+
+假设你使用 Harvester kubeconfig，你可以运行以下命令来安装插件：
+
+```
+kubectl apply -f https://raw.githubusercontent.com/harvester/experimental-addons/main/rancher-vcluster/rancher-vcluster.yaml
+```
+
+## 配置 rancher-vcluster
+
+安装插件后，你需要从 Harvester UI 进行配置，如下所示：
+
+1. 选择 **Advanced** > **Addons**。
+1. 找到 `rancher-vcluster` 插件并选择 **⋮** > **Edit Config**。
+
+   ![](/img/v1.2/rancher-vcluster/VclusterConfig.png)
+
+1. 在 **Hostname** 字段中，输入指向 Harvester VIP 的有效 DNS 记录。该步骤非常重要，因为 vcluster ingress 会同步到父 Harvester 集群。有效的主机名用于过滤 vcluster 工作负载的 ingress 流量。
+1. 
在 **Bootstrap Password** 字段中,输入部署在 vcluster 上的 Rancher 的引导新密码。 + +部署插件后,Rancher 可能需要几分钟时间才能使用。 + +然后,你可以通过你提供的主机名 DNS 记录访问 Rancher。 + +有关更多信息,请参阅 [Rancher 集成](../../rancher/virtualization-management.md)。 + +:::note 禁用 rancher-vcluster + +`rancher-vcluster` 插件部署在使用 Longhorn PVC 的 `vcluster` Statefulset 上。 + +禁用 `rancher-vcluster` 时,PVC `data-rancher-vcluster-0` 将保留在 `rancher-vcluster` 命名空间中。 + +如果你再次启用该插件,PVC 将被重新使用,Rancher 将再次恢复先前状态。 + +如果要擦除数据,请确保 PVC 已被删除。 + +::: \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/seeder.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/seeder.md new file mode 100644 index 00000000000..906ad013180 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/seeder.md @@ -0,0 +1,51 @@ +--- +sidebar_position: 4 +sidebar_label: Seeder +title: "Seeder" +--- + +_从 v1.2.0 起可用_ + +`harvester-seeder` 插件用于在底层节点上执行带外操作。 + +如果裸机节点支持基于 redfish 的访问,那么该插件还能发现节点上的硬件和硬件事件,然后将硬件与相应的 Harvester 节点相关联。 + +你可以从 **Addons** 页面启用 `harvester-seeder` 插件。 + +![](/img/v1.2/vm-import-controller/EnableAddon.png) + +启用插件后,找到所需的主机并选择 **Edit Config**,然后转到 **Out-Of-Band Access** 选项卡。 + +![](/img/v1.2/seeder/EditConfig.png) + +![](/img/v1.2/seeder/OutOfBandAccess.png) + +`seeder` 利用 `ipmi` 来管理底层节点硬件。 + +硬件发现和事件检测需要 `redfish` 支持。 + +## 电源操作 + +为节点定义带外配置后,你可以将该节点置于 `Maintenance` 模式,该模式允许你根据需要关闭或重启节点。 + +![](/img/v1.2/seeder/ShutdownReboot.png) + +节点关闭后,你还可以选择 **Power On** 来重新开机: + +![](/img/v1.2/seeder/PowerOn.png) + + +## 硬件事件聚合 + +如果你在 **Out-of-Band Access** 中启用了 **Event**,`seeder` 将利用 `redfish` 来查询底层硬件以获取组件故障和风扇温度信息。 + +此信息与 Harvester 节点关联,可用作 Kubernetes 事件。 + +![](/img/v1.2/seeder/HardwareEvents.png) + + +:::info + +有时,你可能会卡在 `Out-Of-Band Access` 部分并看到消息 `Waiting for "inventories.metal.harvesterhci.io" to be ready`。在这种情况下,你需要刷新页面。有关详细信息,请参阅此 [issue](https://github.com/harvester/harvester/issues/4412)。 + +::: \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/vmimport.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/vmimport.md similarity index 92% rename from i18n/zh/docusaurus-plugin-content-docs/current/advanced/vmimport.md rename to i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/vmimport.md index 3fd1b2513f5..a1ccd496c57 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/vmimport.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons/vmimport.md @@ -1,20 +1,20 @@ --- -sidebar_position: 5 +sidebar_position: 3 sidebar_label: VM 导入 title: "VM 导入" --- _从 v1.1.0 起可用_ -从 v1.1.0 开始,用户可以将他们的 VMWare 和 OpenStack 虚拟机导入到 Harvester。 +从 v1.1.0 开始,你可以将 VMWare 和 OpenStack 虚拟机导入到 Harvester。 这是通过 vm-import-controller 插件来实现的。 -要使用 VM 导入功能,用户需要启用 vm-import-controller 插件。 +要使用 VM 导入功能,你需要启用 vm-import-controller 插件。 ![](/img/v1.2/vm-import-controller/EnableAddon.png) -默认情况下,vm-import-controller 使用 /var/lib/kubelet 挂载的临时存储。 +默认情况下,vm-import-controller 使用从 /var/lib/kubelet 挂载的临时存储。 在迁移过程中,大型 VM 的节点可能会用尽挂载点上的空间,进而导致后续调度失败。 @@ -65,7 +65,7 @@ stringData: 作为调协过程的一部分,控制器将登录到 vCenter 并验证源的 `spec` 中指定的 `dc` 是否有效。 -通过此检查后,源将被标记为 Ready,可用于 VM 迁移: +通过此检查后,源将被标记为 Ready 并可用于虚拟机迁移: ```shell $ kubectl get vmwaresource.migration diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/csidriver.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/csidriver.md new file mode 100644 index 00000000000..27d67263af7 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/csidriver.md 
@@ -0,0 +1,70 @@ +--- +sidebar_position: 3 +sidebar_label: 第三方存储支持 +title: "第三方存储支持" +--- + +_从 v1.2.0 起可用_ + +Harvester 现在支持在 Harvester 集群中安装[容器存储接口 (CSI)](https://kubernetes-csi.github.io/docs/introduction.html)。你可以将外部存储用于虚拟机的非系统数据磁盘,从而使用为特定需求(性能优化或无缝集成现有的内部存储解决方案等)定制的驱动程序。 + +:::note + +Harvester 中的虚拟机 (VM) 镜像配置程序仍然依赖于 Longhorn。在 v1.2.0 版本之前,Harvester 只支持使用 Longhorn 来存储 VM 数据,不支持将外部存储作为 VM 数据的目标。 + +::: + +## 前提 + +为了使 Harvester 功能正常工作,第三方 CSI Driver 需要具备以下功能: +- 支持扩展 +- 支持快照 +- 支持克隆 +- 支持块设备 +- 支持 Read-Write-Many (RWX),用于 [实时迁移](../vm/live-migration.md) + +## 创建 Harvester 集群 + +Harvester 的操作系统遵循不可变设计,换言之,大多数操作系统文件在重启后会还原到预先配置的状态。因此,要使用第三方 CSI Driver,你需要在安装 Harvester 集群之前执行其他配置。 + +某些 CSI Driver 需要主机上有额外的持久路径。你可以将这些路径添加到 [`os.persistent_state_paths`](../install/harvester-configuration.md#ospersistent_state_paths)。 + +某些 CSI Driver 需要主机上有额外的软件包。你可以使用 [`os.after_install_chroot_commands`](../install/harvester-configuration.md#osafter_install_chroot_commands) 安装这些软件包。 + +:::note + +升级 Harvester 会导致 `after-install-chroot` 对操作系统所做的更改丢失。你还必须配置 `after-upgrade-chroot` 以使你的更改在升级过程中保留。升级 Harvester 之前,请参阅[运行时持久性更改](https://rancher.github.io/elemental-toolkit/docs/customizing/runtime_persistent_changes/)。 + +::: + +## 安装 CSI Driver + +Harvester 集群安装完成后,请参考[如何访问 Harvester 集群的 kubeconfig 文件](../faq.md#如何访问-harvester-集群的-kubeconfig-文件)获取集群的 kubeconfig。 + +通过 Harvester 集群的 kubeconfig,你可以按照每个 CSI Driver 的安装说明将第三方 CSI Driver 安装到集群中。你还必须参考 CSI Driver 文档在 Harvester 集群中创建 `StorageClass` 和 `VolumeSnapshotClass`。 + +## 配置 Harvester 集群 + +在使用 Harvester 的 **Backup & Snapshot** 功能之前,你需要通过 Harvester [csi-driver-config](../advanced/settings.md#csi-driver-config) 来进行一些基本配置。请按照以下步骤进行配置: + +1. 登录 Harvester UI,然后导航至 **Advanced** > **Settings**。 +1. 找到并选择 **csi-driver-config**,然后选择 **⋮** > **Edit Setting** 以访问配置选项。 +1. 将 **Provisioner** 设置为第三方 CSI Driver。 +1. 接下来,配置 **Volume Snapshot Class Name**。此设置指向用于创建卷快照或 VM 快照的 `VolumeSnapshotClass` 的名称。 +1. 同样,配置 **Backup Volume Snapshot Class Name**。这对应于负责创建 VM 备份的 `VolumeSnapshotClass` 的名称。 + +![csi-driver-config-external](/img/v1.2/advanced/csi-driver-config-external.png) + +## 使用 CSI Driver + +成功配置后,你可以使用第三方 StorageClass。你可以在创建空卷或向虚拟机添加新块卷时应用第三方 StorageClass,从而增强 Harvester 集群的存储能力。 + +完成配置后,你的 Harvester 集群就可以充分利用第三方存储集成了。 + +![rook-ceph-volume-external](/img/v1.2/advanced/rook-ceph-volume-external.png) + +![rook-ceph-vm-external](/img/v1.2/advanced/rook-ceph-vm-external.png) + +## 参考 + +- [在 Harvester 中使用 Rook Ceph 存储](https://harvesterhci.io/kb/using_rook_ceph_storage) \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/settings.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/settings.md index 707acf8979b..c380d28132d 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/settings.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/settings.md @@ -1,5 +1,4 @@ --- -id: index sidebar_position: 1 sidebar_label: 设置 title: "设置" @@ -33,6 +32,12 @@ SOME-CA-CERTIFICATES 此设置允许 Harvester 自动添加符合给定 glob 模式的磁盘作为虚拟机存储。 你可以使用逗号分隔来提供多个模式。 +:::note + +此设置仅能添加挂载到系统的格式化磁盘。 + +::: + :::caution - 此设置应用于集群中的**每个节点**。 @@ -122,6 +127,34 @@ https://172.16.0.1/v3/import/w6tp7dgwjj549l88pr7xmxb4x6m54v5kcplvhbp9vv2wzqrrjhr } ``` +## `csi-driver-config` + +_从 v1.2.0 起可用_ + +如果你在 Harvester 集群中安装了第三方 CSI Driver,在使用 **Backup & Snapshot** 相关功能之前,你必须通过此参数进行一些必要的配置。 + +默认: +``` +{ + "driver.longhorn.io": { + "volumeSnapshotClassName": "longhorn-snapshot", + "backupVolumeSnapshotClassName": "longhorn" + } +} +``` + +1. 
为新添加的 CSI Driver 添加配置程序。 +1. 配置 **Volume Snapshot Class Name**,指用于创建卷快照或虚拟机快照的 `VolumeSnapshotClass` 的名称。 +1. 配置 **Backup Volume Snapshot Class Name**,指用于创建虚拟机备份的 `VolumeSnapshotClass` 的名称。 + +## `default-vm-termination-grace-period-seconds` + +_从 v1.2.0 起可用_ + +指定用于停止虚拟机的默认终止宽限期(以秒为单位)。 + +默认值:`120` + ## `http-proxy` 配置 HTTP 代理以访问外部服务,包括下载镜像和备份到 S3 服务。 @@ -186,6 +219,33 @@ Harvester 在用户配置的 `no-proxy` 后附加必要的地址,来确保内 debug ``` +## `ntp-servers` + +_从 v1.2.0 起可用_ + +配置 NTP 服务器以在 Harvester 节点上同步时间。 + +使用此设置,你可以在[安装](../install/harvester-configuration.md#osntp_servers)期间定义 NTP 服务器或在安装后更新 NTP 服务器。 + +:::caution + +修改 NTP 服务器将替换所有节点之前的值。 + +::: + +默认值:"" + +#### 示例 + +``` +{ + "ntpServers": [ + "0.suse.pool.ntp.org", + "1.suse.pool.ntp.org" + ] +} +``` + ## `overcommit-config` 配置 CPU、内存和存储的资源超售百分比。设置资源超售后,即使物理资源已经用完,也能调度额外的虚拟机。 @@ -313,45 +373,50 @@ IP 范围格式是 IPv4 CIDR,而且是集群节点数的 4 倍。 } ``` -## `ui-index` +## `support-bundle-image` -为 UI 配置 HTML 索引位置。 +_从 v1.2.0 起可用_ -默认值:`https://releases.rancher.com/harvester-ui/dashboard/latest/index.html` - -#### 示例 +此配置 Support Bundle 镜像,[rancher/support-bundle-kit](https://hub.docker.com/r/rancher/support-bundle-kit/tags) 提供了各种版本。 +默认: ``` -https://your.static.dashboard-ui/index.html +{ + "repository": "rancher/support-bundle-kit", + "tag": "v0.0.25", + "imagePullPolicy": "IfNotPresent" +} ``` -## `ui-plugin-index` +## `support-bundle-namespaces` -为 Harvester 插件配置 JS 地址 (从 Rancher 中访问 Harvester 时使用)。 +_从 v1.2.0 起可用_ -默认值:`https://releases.rancher.com/harvester-ui/plugin/harvester-latest/harvester-latest.umd.min.js` +在收集 Support Bundle 时指定其他命名空间。默认情况下,Support Bundle 只会从预定义的命名空间捕获资源。 -#### 示例 +预定义的命名空间列表如下: +- cattle-dashboards +- cattle-fleet-local-system +- cattle-fleet-system +- cattle-fleet-clusters-system +- cattle-monitoring-system +- fleet-local +- harvester-system +- local +- longhorn-system +- cattle-logging-system -``` -https://your.static.dashboard-ui/*.umd.min.js -``` - -## `ui-source` +如果你选择更多命名空间,它们将附加到预定义的命名空间列表中。 -配置如何加载 UI 源。 +默认值:none -你可以设置以下值: +## `support-bundle-timeout` -- `auto`:默认。自动检测是否使用绑定的 UI。 -- `external`:使用外部 UI 源。 -- `bundled`:使用绑定的 UI 源。 +_从 v1.2.0 起可用_ -#### 示例 +定义 Support Bundle 的默认超时时间(以分钟为单位)。使用 `0` 禁用超时功能。 -``` -external -``` +默认值:`10` ## `upgrade-checker-enabled` @@ -379,6 +444,8 @@ https://your.upgrade.checker-url/v99/checkupgrade ## `vip-pools` +_自 v1.2.0 起已弃用,请改用 [IP 池](../networking/ippool.md)_ + 使用 CIDR 或 IP 范围配置 VIP 的全局或命名空间 IP 地址池。 默认值:`{}` @@ -402,6 +469,10 @@ https://your.upgrade.checker-url/v99/checkupgrade 默认值:`{"enable":true, "period":300}` +:::note +主机不可用或断电时,虚拟机只会重启,不会迁移。 +::: + #### 示例 ```json @@ -410,3 +481,63 @@ https://your.upgrade.checker-url/v99/checkupgrade "period": 300 } ``` + +## UI 设置 + +### `branding` + +_从 v1.2.0 起可用_ + +用于通过修改 Harvester 产品名称、Logo 和配色方案来全局自定义 UI 界面。 + +默认:**Harvester** + +![containerd-registry](/img/v1.2/advanced/branding.png) + +你可以设置以下选项和值: + +- **Private Label**:此选项将大多数出现的 “Harvester” 替换为你提供的值。 +- **Logo**:上传深色和浅色的 Logo 来替换顶层导航标题中的 Harvester logo。 +- **Favicon**:上传一个网站图标来替换浏览器选项卡中的 Harvester 图标。 +- **Primary Color**:使用自定义颜色替换整个 UI 中使用的主颜色。 +- **Link Color**:使用自定义链接颜色替换整个 UI 中使用的链接颜色。 + +### `ui-index` + +为 UI 配置 HTML 索引位置。 + +默认值:`https://releases.rancher.com/harvester-ui/dashboard/latest/index.html` + +#### 示例 + +``` +https://your.static.dashboard-ui/index.html +``` + +### `ui-plugin-index` + +为 Harvester 插件配置 JS 地址 (从 Rancher 中访问 Harvester 时使用)。 + +默认值:`https://releases.rancher.com/harvester-ui/plugin/harvester-latest/harvester-latest.umd.min.js` + +#### 示例 + +``` 
+https://your.static.dashboard-ui/*.umd.min.js
+```
+
+### `ui-source`
+
+配置如何加载 UI 源。
+
+你可以设置以下值：
+
+- `auto`：默认。自动检测是否使用绑定的 UI。
+- `external`：使用外部 UI 源。
+- `bundled`：使用绑定的 UI 源。
+
+#### 示例
+
+```
+external
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/storageclass.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/storageclass.md
index a979c9fc4c0..e072ce44b4e 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/storageclass.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/storageclass.md
@@ -6,6 +6,12 @@ title: "StorageClass"
 StorageClass 允许管理员描述存储的**类**。不同的 Longhorn StorageClass 可能会映射到集群管理员配置的不同的副本策略、不同的节点调度策略或不同的磁盘调度策略。这个概念在其他存储系统中也称为 **profiles**。
 
+:::note
+
+如需其他存储的支持，请参阅[第三方存储支持](../advanced/csidriver.md)。
+
+:::
+
 ## 创建 StorageClass
 
 你可以从 **Advanced > StorageClasses** 页面创建一个或多个 StorageClass。
@@ -87,7 +93,6 @@ StorageClass 动态创建的卷将具有在类的 `reclaimPolicy` 字段中指
 
 ![](/img/v1.2/storageclass/customize_tab_vol_binding_mode.png)
 
-
 ## 附录 - 用例
 
 ### HDD 场景
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/storagenetwork.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/storagenetwork.md
index 1b6592cab5d..586a87f8428 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/storagenetwork.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/storagenetwork.md
@@ -1,12 +1,12 @@
 ---
-sidebar_position: 3
+sidebar_position: 4
 sidebar_label: 存储网络
 title: "存储网络"
 ---
 
 Harvester 内置 Longhorn 作为存储系统，用于为 VM 和 Pod 提供块设备卷。如果用户希望将 Longhorn 复制流量与 Kubernetes 集群网络（即管理网络）或其它集群工作负载隔离开来，用户可以为 Longhorn 复制流量分配一个专用的存储网络来提高网络带宽和性能。
 
-有关更多信息，请参阅 [Longhorn存储网络](https://longhorn.io/docs/1.3.2/advanced-resources/deploy/storage-network/)。
+有关更多信息，请参阅 [Longhorn 存储网络](https://longhorn.io/docs/1.4.3/advanced-resources/deploy/storage-network/)。
 
 :::note
 
@@ -25,6 +25,7 @@
   - `kubectl get -A vmi`
 - 停止了连接到 Longhorn 卷的所有 Pod。
   - 用户可以使用 Harvester 存储网络设置跳过此步骤。Harvester 将自动停止与 Longhorn 相关的 Pod。
+- 所有正在进行的镜像上传或下载操作都应已完成或删除。
 
 :::caution
 
@@ -48,8 +49,12 @@ kubectl apply -f https://raw.githubusercontent.com/harvester/harvester/v1.1.0/de
   - 如需更多信息，请参阅网络页面，然后配置 `Cluster Network` 和 `VLAN Config`，但不要配置 `Network`。
 - 存储网络的 IP 范围
   - IP 范围不能与 Kubernetes 集群网络冲突或重叠(`10.42.0.0/16`、`10.43.0.0/16`、`10.52.0.0/16` 和 `10.53.0.0/16` 是保留的)。
-  - IP 范围格式是 IPv4 CIDR，而且是集群节点数的 4 倍。Longhorn 将为每个节点使用 2 个 IP，升级过程中会同时运行两个版本的 Longhorn。在升级过程中，每个节点将消耗 4 个 IP。
-  - 如果你的集群有 250 个节点，则 IP 范围应大于 `/22`。
+  - IP 范围应采用 IPv4 CIDR 格式，并且 Longhorn Pod 使用存储网络，如下所示：
+    - `instance-manager-e` 和 `instance-manager-r` Pod：每个节点需要 2 个 IP。升级过程中，这些 Pod 会存在两个版本（旧版本和新版本），升级成功后旧版本将被删除。
+    - `backing-image-ds` pod：用于处理后台镜像数据源的动态上传和下载。镜像上传或下载完成后，这些 Pod 将被删除。
+    - `backing-image-manager` Pod：每个磁盘 1 个 IP，类似于 instance manager Pod。升级过程中，两个版本将共存，升级完成后旧版本将被删除。
+    - 所需的 IP 数量使用此公式计算：`所需的 IP 数量 = 节点数量 * 4 + 磁盘数量 * 2 + 要下载/上传的镜像数量`。
+    - 例如，如果你的集群有 5 个节点，每个节点有 2 个磁盘，同时上传 10 个镜像，则 IP 范围应大于或等于 `/26` (`5 * 4 + 5 * 2 * 2 + 10 = 50`)。
 
 我们将使用下面的配置为例来详细说明存储网络：
@@ -142,7 +147,7 @@ value: '{"vlan":100,"clusterNetwork":"storage","range":"192.168.0.0/24"}'
 
 Harvester 还将创建一个新的 NetworkAttachmentDefinition 并更新 Longhorn Storage Network 设置。
 
-Longhorn 设置更新后，Longhorn 将重新启动所有 `instance-manager-r` 和 `instance-manager-e` 以应用新的网络配置，并且 Harvester 将重新启动 Pod。
+更新 Longhorn 设置后，Longhorn 将重启所有 `instance-manager-r`、`instance-manager-e` 和 `backing-image-manager` Pod 来应用新的网络配置，并且 Harvester 将重启 Pod。
 
 :::note
@@ -185,8 +190,43 @@
 
 #### 步骤 2
 
-- 检查所有 Longhorn `instance-manager-e` 和
`instance-manager-r` 是否准备就绪以及网络是否正确。 -- 检查注释 `k8s.v1.cni.cncf.io/network-status` 是否具有名为 `lhnet1` 的接口并且 IP 地址在 IP 范围内。 +验证所有 Longhorn `instance-manager-e`、`instance-manager-r` 和 `backing-image-manager` Pod 的准备情况,并确认他们的网络配置正确。 + +执行以下命令来检查 Pod 的详细信息: + + +```bash +kubectl -n longhorn-system describe pod +``` + +如果你遇到类似以下的事件,则存储网络可能已耗尽其可用 IP: + +```bash +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + .... + + Warning FailedCreatePodSandBox 2m58s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for + sandbox "04e9bc160c4f1da612e2bb52dadc86702817ac557e641a3b07b7c4a340c9fc48": plugin type="multus" name="multus-cni-network" failed (add): [longhorn-system/ba +cking-image-ds-default-image-lxq7r/7d6995ee-60a6-4f67-b9ea-246a73a4df54:storagenetwork-sdfg8]: error adding container to network "storagenetwork-sdfg8": erro +r at storage engine: Could not allocate IP in range: ip: 172.16.0.1 / - 172.16.0.6 / range: net.IPNet{IP:net.IP{0xac, 0x10, 0x0, 0x0}, Mask:net.IPMask{0xff, +0xff, 0xff, 0xf8}} + + .... +``` + +请重新配置存储网络,使其具有足够的 IP 范围。 + +:::note + +如果存储网络的 IP 已用完,你在上传/下载镜像时可能会遇到相同的错误。请删除相关镜像并重新配置存储网络,使其具有足够的 IP 范围。 + +::: + +#### 步骤 3 + +检查 `k8s.v1.cni.cncf.io/network-status` 注释,确保存在名为 `lhnet1` 的接口,并且 IP 地址在指定的 IP 范围内。 用户可以使用以下命令来列出所有 Longhorn Instance Manager: diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/airgap.md b/i18n/zh/docusaurus-plugin-content-docs/current/airgap.md index c4001db8f4d..64d7985e720 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/airgap.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/airgap.md @@ -1,5 +1,4 @@ --- -id: airgap sidebar_position: 3 sidebar_label: 离线环境 title: "离线环境" diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/api/sidebar.js b/i18n/zh/docusaurus-plugin-content-docs/current/api/sidebar.js deleted file mode 100644 index 1e25280b249..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/current/api/sidebar.js +++ /dev/null @@ -1 +0,0 @@ -module.exports = [{"type":"doc","id":"api/harvester-apis"},{"type":"category","label":"Volumes","items":[{"type":"doc","id":"api/list-namespaced-persistent-volume-claim","label":"List Persistent Volume Claims","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-persistent-volume-claim","label":"Create a Persistent Volume Claim","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-persistent-volume-claim","label":"Read a Persistent Volume Claim","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-persistent-volume-claim","label":"Replace a Persistent Volume Claim","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-persistent-volume-claim","label":"Delete a Persistent Volume Claim","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-persistent-volume-claim","label":"Patch a Persistent Volume Claim","className":"api-method patch"},{"type":"doc","id":"api/list-persistent-volume-claim-for-all-namespaces","label":"List Persistent Volume Claims in all Namespaces","className":"api-method get"}]},{"type":"category","label":"SSH Keys","items":[{"type":"doc","id":"api/list-key-pair-for-all-namespaces","label":"List Key Pairs in all Namespaces","className":"api-method get"},{"type":"doc","id":"api/list-namespaced-key-pair","label":"List Key Pairs","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-key-pair","label":"Create a Key Pair","className":"api-method 
post"},{"type":"doc","id":"api/read-namespaced-key-pair","label":"Read a Key Pair","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-key-pair","label":"Replace a Key Pair","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-key-pair","label":"Delete a Key Pair","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-key-pair","label":"Patch a Key Pair","className":"api-method patch"}]},{"type":"category","label":"Support Bundles","items":[{"type":"doc","id":"api/list-namespaced-support-bundle","label":"List Support Bundles","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-support-bundle","label":"Create a Support Bundle","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-support-bundle","label":"Read a Support Bundle","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-support-bundle","label":"Replace a Support Bundle","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-support-bundle","label":"Delete a Support Bundle","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-support-bundle","label":"Patch a Support Bundle","className":"api-method patch"},{"type":"doc","id":"api/list-support-bundle-for-all-namespaces","label":"List Support Bundles in all Namespaces","className":"api-method get"}]},{"type":"category","label":"Upgrades","items":[{"type":"doc","id":"api/list-namespaced-upgrade","label":"List Upgrades","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-upgrade","label":"Create an Upgrade","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-upgrade","label":"Read an Upgrade","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-upgrade","label":"Replace an Upgrade","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-upgrade","label":"Delete an Upgrade","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-upgrade","label":"Patch an Upgrade","className":"api-method patch"},{"type":"doc","id":"api/list-upgrade-for-all-namespaces","label":"List Upgrades in all Namespaces","className":"api-method get"}]},{"type":"category","label":"Backups","items":[{"type":"doc","id":"api/list-namespaced-virtual-machine-backup","label":"List Virtual Machine Backups","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-virtual-machine-backup","label":"Create a Virtual Machine Backup","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-virtual-machine-backup","label":"Read a Virtual Machine Backup","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-virtual-machine-backup","label":"Replace a Virtual Machine Backup","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-virtual-machine-backup","label":"Delete a Virtual Machine Backup","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-virtual-machine-backup","label":"Patch a Virtual Machine Backup","className":"api-method patch"},{"type":"doc","id":"api/list-virtual-machine-backup-for-all-namespaces","label":"List Virtual Machine Backups in all Namespaces","className":"api-method get"}]},{"type":"category","label":"Images","items":[{"type":"doc","id":"api/list-namespaced-virtual-machine-image","label":"List Virtual Machine Images","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-virtual-machine-image","label":"Create a Virtual Machine Image","className":"api-method 
post"},{"type":"doc","id":"api/read-namespaced-virtual-machine-image","label":"Read a Virtual Machine Image","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-virtual-machine-image","label":"Replace a Virtual Machine Image","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-virtual-machine-image","label":"Delete a Virtual Machine Image","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-virtual-machine-image","label":"Patch a Virtual Machine Image","className":"api-method patch"},{"type":"doc","id":"api/list-virtual-machine-image-for-all-namespaces","label":"List Virtual Machine Images in all Namespaces","className":"api-method get"}]},{"type":"category","label":"Restores","items":[{"type":"doc","id":"api/list-namespaced-virtual-machine-restore","label":"List Virtual Machine Restores","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-virtual-machine-restore","label":"Create a Virtual Machine Restore","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-virtual-machine-restore","label":"Read a Virtual Machine Restore","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-virtual-machine-restore","label":"Replace a Virtual Machine Restore","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-virtual-machine-restore","label":"Delete a Virtual Machine Restore","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-virtual-machine-restore","label":"Patch a Virtual Machine Restore","className":"api-method patch"},{"type":"doc","id":"api/list-virtual-machine-restore-for-all-namespaces","label":"List Virtual Machine Restores in all Namespaces","className":"api-method get"}]},{"type":"category","label":"Virtual Machine Templates","items":[{"type":"doc","id":"api/list-namespaced-virtual-machine-template","label":"List Virtual Machine Templates","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-virtual-machine-template","label":"Create a Virtual Machine Template","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-virtual-machine-template","label":"Read a Virtual Machine Template","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-virtual-machine-template","label":"Replace a Virtual Machine Template","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-virtual-machine-template","label":"Delete a Virtual Machine Template","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-virtual-machine-template","label":"Patch a Virtual Machine Template","className":"api-method patch"},{"type":"doc","id":"api/list-namespaced-virtual-machine-template-version","label":"List Virtual Machine Template Versions","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-virtual-machine-template-version","label":"Create a Virtual Machine Template Version","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-virtual-machine-template-version","label":"Read a Virtual Machine Template Version","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-virtual-machine-template-version","label":"Replace a Virtual Machine Template Version","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-virtual-machine-template-version","label":"Delete a Virtual Machine Template Version","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-virtual-machine-template-version","label":"Patch a 
Virtual Machine Template Version","className":"api-method patch"},{"type":"doc","id":"api/list-virtual-machine-template-for-all-namespaces","label":"List Virtual Machine Templates in all Namespaces","className":"api-method get"},{"type":"doc","id":"api/list-virtual-machine-template-version-for-all-namespaces","label":"List Virtual Machine Template Versions in all Namespaces","className":"api-method get"}]},{"type":"category","label":"Networks","items":[{"type":"doc","id":"api/list-namespaced-network-attachment-definition","label":"List Network Attachment Definitions","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-network-attachment-definition","label":"Create a Network Attachment Definition","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-network-attachment-definition","label":"Read a Network Attachment Definition","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-network-attachment-definition","label":"Replace a Network Attachment Definition","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-network-attachment-definition","label":"Delete a Network Attachment Definition","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-network-attachment-definition","label":"Patch a Network Attachment Definition","className":"api-method patch"},{"type":"doc","id":"api/list-network-attachment-definition-for-all-namespaces","label":"List Network Attachment Definitions in all Namespaces","className":"api-method get"},{"type":"doc","id":"api/list-namespaced-cluster-network","label":"List Cluster Networks","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-cluster-network","label":"Create a Cluster Network","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-cluster-network","label":"Read a Cluster Network","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-cluster-network","label":"Replace a Cluster Network","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-cluster-network","label":"Delete a Cluster Network","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-cluster-network","label":"Patch a Cluster Network","className":"api-method patch"},{"type":"doc","id":"api/list-namespaced-node-network","label":"List Node Networks","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-node-network","label":"Create a Node Network","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-node-network","label":"Read a Node Network","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-node-network","label":"Replace a Node Network","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-node-network","label":"Delete a Node Network","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-node-network","label":"Patch a Node Network","className":"api-method patch"}]},{"type":"category","label":"Migrations","items":[{"type":"doc","id":"api/list-namespaced-virtual-machine-instance-migration","label":"List Virtual Machine Instance Migrations","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-virtual-machine-instance-migration","label":"Create a Virtual Machine Instance Migration","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-virtual-machine-instance-migration","label":"Read a Virtual Machine Instance Migration","className":"api-method 
get"},{"type":"doc","id":"api/replace-namespaced-virtual-machine-instance-migration","label":"Replace a Virtual Machine Instance Migration","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-virtual-machine-instance-migration","label":"Delete a Virtual Machine Instance Migration","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-virtual-machine-instance-migration","label":"Patch a Virtual Machine Instance Migration","className":"api-method patch"},{"type":"doc","id":"api/list-virtual-machine-instance-migration-for-all-namespaces","label":"List Virtual Machine Instance Migrations in all Namespaces","className":"api-method get"}]},{"type":"category","label":"Virtual Machines","items":[{"type":"doc","id":"api/list-namespaced-virtual-machine-instance","label":"List Virtual Machine Instances","className":"api-method get"},{"type":"doc","id":"api/read-namespaced-virtual-machine-instance","label":"Read a Virtual Machine Instance","className":"api-method get"},{"type":"doc","id":"api/list-namespaced-virtual-machine","label":"List Virtual Machines","className":"api-method get"},{"type":"doc","id":"api/create-namespaced-virtual-machine","label":"Create a Virtual Machine","className":"api-method post"},{"type":"doc","id":"api/read-namespaced-virtual-machine","label":"Read a Virtual Machine","className":"api-method get"},{"type":"doc","id":"api/replace-namespaced-virtual-machine","label":"Replace a Virtual Machine","className":"api-method put"},{"type":"doc","id":"api/delete-namespaced-virtual-machine","label":"Delete a Virtual Machine","className":"api-method delete"},{"type":"doc","id":"api/patch-namespaced-virtual-machine","label":"Patch a Virtual Machine","className":"api-method patch"},{"type":"doc","id":"api/list-virtual-machine-instance-for-all-namespaces","label":"List Virtual Machine Instances in all Namespaces","className":"api-method get"},{"type":"doc","id":"api/list-virtual-machine-for-all-namespaces","label":"List Virtual Machines in all Namespaces","className":"api-method get"}]}]; \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/authentication.md b/i18n/zh/docusaurus-plugin-content-docs/current/authentication.md index 73f4958633e..1d3d11002e7 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/authentication.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/authentication.md @@ -1,5 +1,4 @@ --- -id: authentication sidebar_position: 5 sidebar_label: 身份验证 title: "身份验证" diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/faq.md b/i18n/zh/docusaurus-plugin-content-docs/current/faq.md index b2ef64f6829..b06f76b0d35 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/faq.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/faq.md @@ -1,5 +1,4 @@ --- -id: faq sidebar_position: 17 sidebar_label: 常见问题 title: "常见问题" @@ -49,3 +48,57 @@ New password for default administrator (user-xxxxx): ### 我添加了一个带分区的磁盘。为什么没有被检测到? 从 Harvester v1.0.2 开始,我们不再支持添加其他分区磁盘,因此请务必先删除所有分区(例如,使用 `fdisk`)。 + +### 为什么有些 Harvester Pod 会变成 ErrImagePull/ImagePullBackOff? 
+ +可能是因为你的 Harvester 集群是离线的,并且缺少某些预加载的容器镜像。Kubernetes 有可以对膨胀镜像存储进行垃圾收集的机制。当存储容器镜像的分区存储超过 85% 时,`kubelet` 会尝试根据上次使用镜像的时间来修剪镜像(从最旧的镜像开始),直到占用率再次低于 80%。这些数字(85%/80%)是 Kubernetes 的默认高/低阈值。 + +要从此状态恢复,请根据集群的配置执行以下操作之一: +- 从集群外部的源中拉取丢失的镜像(如果是离线环境,你可能需要事先设置 HTTP 代理)。 +- 手动从 Harvester ISO 镜像导入镜像。 + +:::note + +以 v1.1.2 为例,从官方网址下载 Harvester ISO 镜像。然后从 ISO 镜像中提取镜像列表,从而决定我们要导入哪个镜像 tarball。例如,如果要导入缺少的容器镜像 `rancher/harvester-upgrade`: + +```shell +$ curl -sfL https://releases.rancher.com/harvester/v1.1.2/harvester-v1.1.2-amd64.iso -o harvester.iso + +$ xorriso -osirrox on -indev harvester.iso -extract /bundle/harvester/images-lists images-lists + +$ grep -R "rancher/harvester-upgrade" images-lists/ +images-lists/harvester-images-v1.1.2.txt:docker.io/rancher/harvester-upgrade:v1.1.2 +``` + +找出镜像 tarball 的位置,并将其从 ISO 镜像中提取。解压缩提取的 zstd 镜像 tarball。 + +```shell +$ xorriso -osirrox on -indev harvester.iso -extract /bundle/harvester/images/harvester-images-v1.1.2.tar.zst harvester.tar.zst + +$ zstd -d --rm harvester.tar.zst +``` + +将镜像 tarball 上传到需要恢复的 Harvester 节点。最后,执行以下命令在每个节点上导入容器镜像。 + +```shell +$ ctr -n k8s.io images import harvester.tar +$ rm harvester.tar +``` + +::: + +- 参考其他节点找到该节点丢失的镜像,然后从仍具有该镜像的节点导出镜像,并将镜像导入到丢失镜像的节点上。 + +为了防止这种情况发生,如果镜像存储磁盘空间紧张,我们建议在每次成功升级 Harvester 后清理以前版本中未使用的容器镜像。我们提供了一个 [harv-purge-images 脚本](https://github.com/harvester/upgrade-helpers/blob/main/bin/harv-purge-images.sh),可用于轻松清理磁盘空间,特别容器镜像存储。该脚本必须在每个 Harvester 节点上执行。例如,如果原来是 v1.1.2 的集群现在升级到了 v1.2.0,你可以执行以下操作来丢弃仅在 v1.1.2 中使用但在 v1.2.0 中不再需要的容器镜像: + +```shell +# on each node +$ ./harv-purge-images.sh v1.1.2 v1.2.0 +``` + +:::caution + +- 该脚本仅下载镜像列表并比较两者以计算两个版本之间的差异。它不与集群通信,因此不知道集群是从哪个版本升级的。 +- 我们发布了自 v1.1.0 以来每个版本的镜像列表。对于 v1.1.0 之前的集群,你需要手动清理旧镜像。 + +::: diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/host/_category_.json b/i18n/zh/docusaurus-plugin-content-docs/current/host/_category_.json index b1e8cce789f..a7e06c84020 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/host/_category_.json +++ b/i18n/zh/docusaurus-plugin-content-docs/current/host/_category_.json @@ -4,7 +4,7 @@ "collapsible": false, "collapsed": false, "link": { - "type": "doc", - "id": "host-management" - } + "type": "doc", + "id": "host" + } } \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/host/host.md b/i18n/zh/docusaurus-plugin-content-docs/current/host/host.md index 6e3f2c6ca6b..36c3d66e7ff 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/host/host.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/host/host.md @@ -1,5 +1,4 @@ --- -id: host-management sidebar_position: 1 sidebar_label: 主机管理 title: "主机管理" @@ -93,7 +92,7 @@ Admin 用户可以点击 **Enable Maintenance Mode** 来自动驱逐节点中所 :::note -如果你在 QEMU 环境中测试 Harvester,你需要使用 QEMU v6.0 或更高版本。以前版本的 QEMU 将始终为 NVMe 磁盘模拟生成相同的 WWN,这将导致 Harvester 不添加其他磁盘。 +如果你在 QEMU 环境中测试 Harvester,你需要使用 QEMU v6.0 或更高版本。以前版本的 QEMU 将始终为 NVMe 磁盘模拟生成相同的 WWN,这将导致 Harvester 不添加其他磁盘。但是,你仍然可以使用 SCSI 控制器来添加虚拟磁盘。WWN 信息可以与磁盘附加操作一起手动添加。有关更多详情,请参阅[脚本](https://github.com/harvester/vagrant-rancherd/blob/2782981b6017754d016f5b72d630dff4895f7ad6/scripts/attach-disk.sh#L75)。 ::: diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/index.md b/i18n/zh/docusaurus-plugin-content-docs/current/index.md index 4e3cb6033c7..b0d797b3523 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/index.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/index.md @@ -1,5 +1,4 @@ --- -id: overview sidebar_position: 1 sidebar_label: Harvester 介绍 slug: / @@ -25,7 +24,7 @@ Harvester 
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/index.md b/i18n/zh/docusaurus-plugin-content-docs/current/index.md
index 4e3cb6033c7..b0d797b3523 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/index.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/index.md
@@ -1,5 +1,4 @@
 ---
-id: overview
 sidebar_position: 1
 sidebar_label: Harvester Overview
 slug: /
@@ -25,7 +24,7 @@ The Harvester architecture consists of cutting-edge open-source technologies:
 
 - **Built on top of Kubernetes.** [Kubernetes](https://kubernetes.io/) has become the predominant infrastructure language, and Harvester is an HCI solution with Kubernetes under the hood.
 - **Virtualization management with Kubevirt.** [Kubevirt](https://kubevirt.io/) provides virtualization management using KVM on top of Kubernetes.
 - **Storage management with Longhorn.** [Longhorn](https://longhorn.io/) provides distributed block storage and tiering.
-- **Observability with Grafana and Prometheus.** [Granfana](https://grafana.com/) and [Prometheus](https://prometheus.io/) provide robust monitoring and logging. Delivered as a bootable appliance image,
+- **Observability with Grafana and Prometheus.** [Grafana](https://grafana.com/) and [Prometheus](https://prometheus.io/) provide robust monitoring and logging. Delivered as a bootable appliance image,
 
 ![](/img/v1.2/architecture.svg)
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/install/harvester-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/current/install/harvester-configuration.md
index 94169267f12..937928dc016 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/install/harvester-configuration.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/install/harvester-configuration.md
@@ -58,7 +58,8 @@ install:
       hwAddr: "B8:CA:3A:6A:64:7C"
     method: dhcp
   force_efi: true
-  device: /dev/vda
+  device: /dev/sda
+  data_disk: /dev/sdb
   silent: true
   iso_url: http://myserver/test.iso
   poweroff: true
@@ -70,10 +71,21 @@ install:
   vip_mode: dhcp
   force_mbr: false
   addons:
-    rancher-monitoring:
+    harvester_vm_import_controller:
+      enabled: false
+      values_content: ""
+    harvester_pcidevices_controller:
+      enabled: false
+      values_content: ""
+    rancher_monitoring:
       enabled: true
-    rancher-logging:
+      values_content: ""
+    rancher_logging:
       enabled: false
+      values_content: ""
+    harvester_seeder:
+      enabled: false
+      values_content: ""
   system_settings:
     auto-disk-provision-paths: ""
 ```
@@ -204,6 +216,61 @@ os:
       path: /etc/crontab
 ```
 
+### `os.persistent_state_paths`
+
+#### Definition
+
+The `os.persistent_state_paths` option configures custom paths whose contents persist across reboots; changes made to files under these paths are not lost after a reboot.
+
+#### Example
+
+See the following example configuration for installing `rook-ceph` in Harvester:
+
+```yaml
+os:
+  persistent_state_paths:
+    - /var/lib/rook
+    - /var/lib/ceph
+  modules:
+    - rbd
+    - nbd
+```
+
+### `os.after_install_chroot_commands`
+
+#### Definition
+
+You can use `after_install_chroot_commands` to install additional packages. The `after-install-chroot` stage provided by [elemental-toolkit](https://rancher.github.io/elemental-toolkit/docs/) lets you run commands that are not subject to the file-system write restrictions, ensuring that the results of user-defined commands persist across reboots.
+
+#### Example
+
+See the following example configuration for installing an RPM package in Harvester:
+
+```yaml
+os:
+  after_install_chroot_commands:
+    - rpm -ivh <url_to_the_rpm>
+
+```
+
+DNS resolution is not available during the `after-install-chroot` stage, and the `nameserver` may not be reachable. If you need to reach a domain by URL to install a package, create a temporary `/etc/resolv.conf` file first. For example:
+
+```yaml
+os:
+  after_install_chroot_commands:
+    - "rm -f /etc/resolv.conf && echo 'nameserver 8.8.8.8' | sudo tee /etc/resolv.conf"
+    - "mkdir /usr/local/bin"
+    - "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && ./get_helm.sh"
+    - "rm -f /etc/resolv.conf && ln -s /var/run/netconfig/resolv.conf /etc/resolv.conf"
+```
+
+
+:::note
+
+Upgrading Harvester causes the changes that `after-install-chroot` made to the operating system to be lost. You must also configure `after-upgrade-chroot` to make your changes persist across an upgrade. See [Runtime persistent changes](https://rancher.github.io/elemental-toolkit/docs/customizing/runtime_persistent_changes/) before upgrading Harvester.
+
+:::
+
 ### `os.hostname`
 
 #### Definition
@@ -416,6 +483,8 @@ install:
 
 The device to install the operating system on.
 
+If your machine includes multiple physical storage devices and is installed via PXE, it is better to specify the storage device with `/dev/disk/by-id/$id` or `/dev/disk/by-path/$path`.
+
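If you are unsure which stable alias to use, you can list the symlinks the kernel maintains for each disk on the target machine and copy the entry for your device into the configuration. The device IDs below are placeholders, not values from the Harvester docs.

```shell
# Stable aliases for every detected disk; pick the entry for your target device.
$ ls -l /dev/disk/by-id/
$ ls -l /dev/disk/by-path/
```

A hypothetical configuration fragment using such aliases might look like this:

```yaml
install:
  device: /dev/disk/by-id/wwn-0x5000c500a7b3c2d1      # placeholder ID; use the one shown on your machine
  data_disk: /dev/disk/by-id/wwn-0x5000c500b4c8d3e2   # placeholder ID
```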
 ### `install.silent`
 
 Reserved.
 
@@ -524,6 +593,8 @@ _Available as of v1.0.1_
 
 Sets the default storage device for storing VM data.
 
+If your machine includes multiple physical storage devices and is installed via PXE, it is better to specify the storage device with `/dev/disk/by-id/$id` or `/dev/disk/by-path/$path`.
+
 Default value: the same storage device as [`install.device`](#installdevice)
 
 #### Example
 
@@ -548,18 +619,100 @@ _Available as of v1.2.0_
 
 ```yaml
 install:
   addons:
-    rancher-monitoring:
+    rancher_monitoring:
       enabled: true
-    rancher-logging:
+    rancher_logging:
       enabled: false
 ```
 
-Harvester v1.2.0 ships with four addons:
+Harvester v1.2.0 ships with five addons:
 
-- harvester-vm-import-controller
-- harvester-pcidevices-controller
+- vm-import-controller (chartName: `harvester-vm-import-controller`)
+- pcidevices-controller (chartName: `harvester-pcidevices-controller`)
 - rancher-monitoring
 - rancher-logging
+- harvester-seeder (experimental)
+
+### `install.harvester.storage_class.replica_count`
+
+_Available as of v1.1.2_
+
+#### Definition
+
+Sets the replica count of Harvester's default StorageClass `harvester-longhorn`.
+
+Default value: 3
+
+Supported values: 1, 2, and 3. All other values are treated as 3.
+
+In edge scenarios, users may deploy a single-node Harvester cluster and can set this value to 1. In most scenarios, keeping the default value of 3 is recommended for storage high availability.
+
+For more information, see [longhorn-replica-count](https://longhorn.io/docs/1.4.1/references/settings/#default-replica-count).
+
+#### Example
+
+```yaml
+install:
+  harvester:
+    storage_class:
+      replica_count: 1
+```
+
+### `install.harvester.longhorn.default_settings.guaranteedEngineManagerCPU`
+
+_Available as of v1.2.0_
+
+#### Definition
+
+The default percentage of the total allocatable CPU to reserve on each node for each Longhorn engine manager pod.
+
+Default value: 12
+
+Supported values: 0 to 12. All other values are treated as 12.
+
+This integer value indicates the percentage of the total allocatable CPU on each node to reserve for each engine manager pod.
+
+In edge scenarios, users may deploy a single-node Harvester cluster and can set this parameter to a value lower than 12. In most scenarios, keeping the default value is recommended for system high availability.
+
+Before setting this value, see [longhorn-guaranteed-engine-manager-cpu](https://longhorn.io/docs/1.4.1/references/settings/#guaranteed-engine-manager-cpu) for more details.
+
+#### Example
+
+```yaml
+install:
+  harvester:
+    longhorn:
+      default_settings:
+        guaranteedEngineManagerCPU: 6
+```
+
+### `install.harvester.longhorn.default_settings.guaranteedReplicaManagerCPU`
+
+_Available as of v1.2.0_
+
+#### Definition
+
+The default percentage of the total allocatable CPU to reserve on each node for each Longhorn replica manager pod.
+
+Default value: 12
+
+Supported values: 0 to 12. All other values are treated as 12.
+
+This integer value indicates the percentage of the total allocatable CPU on each node to reserve for each replica manager pod.
+
+In edge scenarios, users may deploy a single-node Harvester cluster, in which case this parameter can be set to a value lower than 12. In most scenarios, keeping the default value is recommended for system high availability.
+
+Before setting this value, see [longhorn-guaranteed-replica-manager-cpu](https://longhorn.io/docs/1.4.1/references/settings/#guaranteed-replica-manager-cpu) for more details.
+
+#### Example
+
+```yaml
+install:
+  harvester:
+    longhorn:
+      default_settings:
+        guaranteedReplicaManagerCPU: 6
+```
 
 ### `system_settings`
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/install/install-binaries-mode.md b/i18n/zh/docusaurus-plugin-content-docs/current/install/install-binaries-mode.md
new file mode 100644
index 00000000000..7cd7ab0480d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/install/install-binaries-mode.md
@@ -0,0 +1,45 @@
+---
+sidebar_position: 8
+sidebar_label: Install Harvester binaries only
+title: "Install Harvester binaries only"
+keywords:
+  - Harvester
+  - harvester
+  - Rancher
+  - rancher
+  - ISO installation
+Description: To get the Harvester ISO, download it from the releases on GitHub. During installation, you can choose to install the binaries only.
+---
+
+_Available as of v1.2.0_
+
+The `Install Harvester binaries only` mode lets you install and configure the binaries only, which is ideal for cloud and edge use cases.
+
+![choose-installation-mode.png](/img/v1.2/install/choose-installation-mode.png)
+
+### Background
+Currently, when a new Harvester node starts, it must either become the first node of a cluster or join an existing cluster.
+Both modes work well when you already know enough about the environment the Harvester node is installed into.
+However, for use cases such as bare-metal cloud providers and the edge, these installation modes load the operating system and Harvester content onto the node while the network cannot be configured yet, so the Kubernetes and network configuration will not be applied.
+
+If you choose `Install Harvester binaries only`, you need to perform additional configuration after the first boot:
+
+- Harvester's create/join options
+- Management network interface details
+- Cluster token
+- Node password
+
+The installer then applies the endpoint configuration and starts Harvester. No further reboot is required.
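To make those four items concrete, here is a minimal, hypothetical sketch of a [Harvester configuration](./harvester-configuration.md) answering the post-first-boot prompts; the interface name, token, and password are placeholders, not recommended values.

```yaml
# Illustrative only: the values requested after the first boot.
install:
  mode: create              # or "join" to join an existing cluster
  management_interface:
    interfaces:
      - name: enp1s0        # placeholder NIC name
    method: dhcp
token: my-cluster-token     # placeholder cluster token
os:
  password: rancher         # placeholder node password
```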
+### Streaming disk mode
+Harvester publishes raw image artifacts with Harvester pre-installed. The Harvester installer now supports streaming a pre-installed image directly to the disk, which allows better integration with cloud providers.
+
+On `Equinix Metal`, you can use the following kernel parameters to enable the streaming mode:
+
+```
+ip=dhcp net.ifnames=1 rd.cos.disable rd.noverifyssl root=live:http://${artifactEndpoint}/harvester-v1.2.0-rootfs-amd64.squashfs harvester.install.automatic=true harvester.scheme_version=1 harvester.install.device=/dev/vda harvester.os.password=password harvester.install.raw_disk_image_path=http://${artifactEndpoint}/harvester-v1.2.0-amd64.raw harvester.install.mode=install console=tty1 harvester.install.tty=tty1 harvester.install.config_url=https://metadata.platformequinix.com/userdata harvester.install.management_interface.interfaces="name:enp1s0" harvester.install.management_interface.method=dhcp harvester.install.management_interface.bond_options.mode=balance-tlb harvester.install.management_interface.bond_options.miimon=100
+```
+
+:::note
+When streaming to disk, it is recommended to host the raw disk artifact close to the target machine, because the raw disk artifact is nearly 16 GB.
+:::
\ No newline at end of file
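For a quick lab test, any static HTTP server can stand in for `${artifactEndpoint}`. The sketch below uses Python's built-in server and assumes the two artifacts have already been downloaded into the current directory; the port is arbitrary.

```shell
# Serve the streaming artifacts over HTTP (lab use only).
$ ls
harvester-v1.2.0-amd64.raw  harvester-v1.2.0-rootfs-amd64.squashfs
$ python3 -m http.server 8080
```

`${artifactEndpoint}` would then be `http://<server-ip>:8080`.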
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/install/iso-install.md b/i18n/zh/docusaurus-plugin-content-docs/current/install/iso-install.md
index 4b30c2cc1fc..62b0a1605d6 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/install/iso-install.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/install/iso-install.md
@@ -1,5 +1,4 @@
 ---
-id: index
 sidebar_position: 2
 sidebar_label: ISO Installation
 title: "ISO Installation"
@@ -12,78 +11,107 @@ keywords:
 Description: To get the Harvester ISO, download it from the releases on GitHub. During installation, you can choose to create a new cluster or join the node to an existing cluster.
 ---
 
-## Installation Steps
-To get the Harvester ISO, download it from the [releases on GitHub](https://github.com/harvester/harvester/releases).
+Harvester is delivered as a bootable appliance image that you can install directly on a bare-metal server using the ISO image. To get the ISO image, download **💿harvester-v1.x.x-amd64.iso** from the [Harvester releases](https://github.com/harvester/harvester/releases) page.
 
-During installation, you can choose to create a new cluster or join the node to an existing cluster.
+During installation, you can choose to **create a new Harvester cluster** or **join the node to an existing Harvester cluster**.
 
-Note: this [video](https://youtu.be/X0VIGZ_lExQ) gives an overview of the ISO installation.
+The following [video](https://youtu.be/X0VIGZ_lExQ) gives an overview of the ISO installation.
 
 <div class="text-center">
 <iframe src="https://www.youtube.com/embed/X0VIGZ_lExQ"></iframe>
 </div>
 
-1. Mount the Harvester ISO disk and boot the server by selecting `Harvester Installer`.
+## Installation Steps
+
+1. Mount the Harvester ISO file and boot the server by selecting `Harvester Installer`.
+
+   ![iso-install.png](/img/v1.2/install/iso-install.png)
 
-1. Choose an installation mode:
-   - Create a new Harvester cluster.
-     Note: by default, the first node will be the management node of the cluster. When there are three nodes, the first two nodes added will be promoted to management nodes automatically to form an HA cluster.
+1. Use the arrow keys to choose an installation mode. By default, the first node will be the management node of the cluster.
-     If you want management nodes promoted in other zones, you can provide the URL of a [Harvester configuration](./harvester-configuration.md) while customizing the host and add the node label `topology.kubernetes.io/zone` under [os.labels](./harvester-configuration.md#oslabels). In that case, at least three different zones are required.
+
+   ![choose-installation-mode.png](/img/v1.2/install/choose-installation-mode.png)
+
-   - Join an existing Harvester cluster.
+   - `Create a new Harvester cluster`: Creates an entirely new Harvester cluster.
-   - Install Harvester binaries only.
-     Note: if you choose `Install Harvester binaries only`, additional setup is required after the first boot.
-
-1. Choose the device to install the Harvester cluster on.
-   ![select-disk.png](/img/v1.2/install/select-disk.png)
-
-   Note: by default, Harvester uses a [GPT](https://en.wikipedia.org/wiki/GUID_Partition_Table) partition table for both UEFI and BIOS. If you use BIOS boot, you can choose [MBR](https://en.wikipedia.org/wiki/Master_boot_record).
-
-   When your machine has only one disk installed, or uses the same disk to store the operating system and VM data, you need to configure the size of the persistent partition used to store system packages and container images. Its default and minimum value is 150 GB.
+   - `Join an existing Harvester cluster`: Joins an existing Harvester cluster. You need the VIP and the cluster token of the cluster you want to join.
+
+   - `Install Harvester binaries only`: If you choose this option, additional setup is required after the first boot.
 
   :::info
+   When there are three nodes, the first two nodes added will be promoted to management nodes automatically to form an HA cluster. If you want management nodes promoted in other zones, you can provide the URL of a [Harvester configuration](./harvester-configuration.md) while customizing the host and add the node label `topology.kubernetes.io/zone` under [os.labels](./harvester-configuration.md#oslabels). In that case, at least three different zones are required.
+   :::
+
+1. Choose the installation disk on which to install the Harvester cluster and the data disk on which to store VM data. By default, Harvester uses the [GUID Partition Table (GPT)](https://en.wikipedia.org/wiki/GUID_Partition_Table) partitioning scheme for both UEFI and BIOS. If you use BIOS boot, you can choose [Master boot record (MBR)](https://en.wikipedia.org/wiki/Master_boot_record).
+
+   ![choose-installation-target-data-disk.png](/img/v1.2/install/choose-installation-target-data-disk.png)
+
+   - `Installation disk`: The disk on which to install the Harvester cluster.
+   - `Data disk`: The disk on which to store VM data. Choosing a separate disk to store VM data is recommended.
+   - `Persistent size`: If you have only one disk, or use the same disk to store both the operating system and VM data, you need to configure the persistent partition size used to store system packages and container images. The default and minimum persistent partition size is 150 GiB. You can specify the size in the 200Gi or 153600Mi format.
+
+1. Configure the `HostName` of the node.
+
+   ![config-hostname.png](/img/v1.2/install/config-hostname.png)
-1. Configure a hostname for this node.
 
-1. Choose the network interface for the management network. By default, Harvester will create a bonded NIC named `mgmt-bo`, and the IP address can be configured via DHCP or assigned statically.
+1. Configure the network interface(s) for the management network. By default, Harvester creates a bonded NIC named `mgmt-bo`, and the IP address can be configured via DHCP or assigned statically.
+
+   :::note
+   The node IP cannot be changed during the entire lifecycle of the Harvester cluster. If you use DHCP, you must make sure the DHCP server always offers the same IP to the same node. If a node IP changes, the related node cannot rejoin the cluster and may even break the cluster.
+   :::
+
+   ![config-network.png](/img/v1.2/install/config-network.png)
-
-   Note: the node IP cannot be changed during the lifecycle of the Harvester cluster. If DHCP is used, the user must make sure the DHCP server always offers the same IP to the same node. If a node IP changes, the related node cannot rejoin the cluster and may even break the cluster.
 
-1. (Optional) Configure DNS servers. Use commas as separators.
+1. (Optional) Configure the `DNS Servers`. Use commas as separators to add more DNS servers. Leave it blank to use the default DNS server.
 
-1. Configure the `Virtual IP` used to access the cluster or for other nodes to join the cluster.
-   Note: if your IP address is configured via DHCP, you need to configure a static MAC-to-IP address mapping on the DHCP server to have a persistent virtual IP, and the VIP must be different from all node IPs.
+
+   ![config-dns-server.png](/img/v1.2/install/config-dns-server.png)
+
+1. Select a `VIP Mode` to configure the virtual IP (VIP). The VIP is used to access the cluster and for other nodes to join the cluster.
+
+   :::note
+   If the IP address is configured via DHCP, you need to configure a static MAC-to-IP address mapping on the DHCP server to get a persistent virtual IP (VIP), and the VIP must be unique.
+   :::
-1. Configure the `cluster token`. This token will be used when adding other nodes to the cluster.
+
+   ![config-virtual-ip.png](/img/v1.2/install/config-virtual-ip.png)
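If your environment uses a dnsmasq-based DHCP server, the static MAC-to-IP reservations mentioned in the note above can be a single `dhcp-host` line per address, as in the hypothetical fragment below; the MAC addresses and IPs are placeholders.

```
# /etc/dnsmasq.d/harvester.conf (illustrative)
dhcp-host=B8:CA:3A:6A:64:7C,192.168.1.11   # management NIC of node 1 always receives .11
dhcp-host=0E:15:5D:0A:33:02,192.168.1.10   # reservation for the MAC requesting the VIP, keeping the VIP persistent
```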
-1. Configure the login password of the host. The default SSH user is `rancher`.
+1. Configure and confirm the `Password` used to access the node. The default SSH user is `rancher`.
-1. It is recommended to configure the NTP server to make sure all nodes' times are synchronized. The default is `0.suse.pool.ntp.org`.
+
+   ![config-password.png](/img/v1.2/install/config-password.png)
-1. (Optional) If you need an HTTP proxy to access the outside world, enter the proxy URL here. Otherwise, leave it blank.
+1. Configure the `NTP Servers` to make sure all nodes' times are synchronized. The default is `0.suse.pool.ntp.org`. Use commas as separators to add more NTP servers.
-1. (Optional) You can import SSH keys from a remote server URL. Your GitHub public keys can be used with `https://github.com/<username>.keys`.
+
+   ![config-ntp-server.png](/img/v1.2/install/config-ntp-server.png)
-1. (Optional) If you need to customize the host with a [Harvester configuration file](./harvester-configuration.md), enter the HTTP URL here.
-
-1. After you confirm the installation options, Harvester is installed on your host. The installation may take a few minutes.
-
-1. The host reboots after the installation is complete. After the reboot, the Harvester console shows the management URL and status. You can use `F12` to switch from the Harvester console to the shell, and type `exit` to go back to the Harvester console.
-   Note: if you chose `Install Harvester binaries only` on the first page, additional setup is required after the first boot.
+1. (Optional) If you need an HTTP proxy to access the outside world, enter the `Proxy address`. Otherwise, leave it blank.
+
+   ![config-proxy.png](/img/v1.2/install/config-proxy.png)
+
+1. (Optional) You can choose to import SSH keys by providing an `HTTP URL`. For example, use your GitHub public keys at `https://github.com/<username>.keys`.
+
+   ![import-ssh-keys.png](/img/v1.2/install/import-ssh-keys.png)
+
+1. (Optional) If you need to customize the host with a [Harvester configuration file](./harvester-configuration.md), enter the `HTTP URL` here.
+
+   ![remote-config.png](/img/v1.2/install/remote-config.png)
+
+1. Review and confirm your installation options. After you confirm, Harvester is installed on your host. The installation may take a few minutes.
+
+   ![confirm-install.png](/img/v1.2/install/confirm-install.png)
+
+1. Once the installation is complete, your node restarts. After the reboot, the Harvester console shows the management URL and status. The default URL of the web interface is `https://your-virtual-ip`. You can use `F12` to switch from the Harvester console to the shell, and type `exit` to go back to the Harvester console.
+
+   :::note
+   If you chose `Install Harvester binaries only` on the first page, additional setup is required after the first boot.
+   :::
-1. The default URL of the web interface is `https://your-virtual-ip`.
 
   ![iso-installed.png](/img/v1.2/install/iso-installed.png)
 
 1. On the first login, you will be prompted to set the password for the default `admin` user.
 
-   ![first-login.png](/img/v1.2/install/first-time-login.png)