Update Chinese docs and remove doc IDs #434

Status: Open. Wants to merge 1 commit into base: main.
2 changes: 1 addition & 1 deletion docs/advanced/addons/rancher-vcluster.md
Original file line number Diff line number Diff line change
@@ -35,7 +35,7 @@ After installing the addon, you need to configure it from the Harvester UI as fo
1. Select **Advanced** > **Addons**.
1. Find the `rancher-vcluster` addon and select **⋮** > **Edit Config**.

![](/img/v1.2/rancher-vcluster/VclusterConfig.png)
![](/img/v1.2/rancher-vcluster/VclusterConfig.png)

1. In the **Hostname** field, enter a valid DNS record pointing to the Harvester VIP. This is essential as the vcluster ingress is synced to the parent Harvester cluster. A valid hostname is used to filter ingress traffic to the vcluster workload.
1. In the **Bootstrap Password** field, enter the bootstrap password for the new Rancher deployed on the vcluster.
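Because ingress traffic is filtered by hostname, a quick sanity check before saving the config is to confirm the DNS record resolves to the Harvester VIP. A minimal sketch (the hostname and VIP below are placeholders, not values from the docs):

```python
import socket

def resolves_to(hostname: str, expected_ip: str) -> bool:
    """Return True if every IPv4 address for hostname is the expected IP."""
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}
    except socket.gaierror:
        return False  # name does not resolve at all
    return addrs == {expected_ip}

# Placeholder values; substitute your vcluster hostname and the Harvester VIP.
print(resolves_to("localhost", "127.0.0.1"))
```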
1 change: 0 additions & 1 deletion docs/advanced/settings.md
@@ -1,5 +1,4 @@
---
id: index
sidebar_position: 1
sidebar_label: Settings
title: "Settings"
18 changes: 9 additions & 9 deletions docs/advanced/storagenetwork.md
@@ -48,17 +48,17 @@ kubectl apply -f https://raw.githubusercontent.com/harvester/harvester/v1.1.0/de
## Configuration Example

- VLAN ID
Please check with your network switch settings, and provide a dedicated VLAN ID for the Storage Network.
Please check with your network switch settings, and provide a dedicated VLAN ID for the Storage Network.
- Well-configured Cluster Network and VLAN Config
Please refer to the Networking page for more details and configure `Cluster Network` and `VLAN Config` but not `Networks`.
Please refer to the Networking page for more details and configure `Cluster Network` and `VLAN Config` but not `Networks`.
- IP range for Storage Network
IP range should not conflict or overlap with Kubernetes cluster networks (`10.42.0.0/16`, `10.43.0.0/16`, `10.52.0.0/16` and `10.53.0.0/16` are reserved).
- IP range should be in IPv4 CIDR format and Longhorn pods use Storage Network as follows:
`instance-manager-e` and `instance-manager-r` pods: These require 2 IPs per node. During an upgrade, two versions of these pods will exist (old and new), and the old version will be deleted once the upgrade is successful.
- `backing-image-ds` pods: These are employed to process on-the-fly uploads and downloads of backing image data sources. These pods will be removed once the image uploads or downloads are completed.
- `backing-image-manager` pods: 1 IP per disk, similar to the instance manager pods. Two versions of these will coexist during an upgrade, and the old ones will be removed after the upgrade is completed.
- The required number of IPs is calculated using a simple formula: `Required Number of IPs = Number of Nodes * 4 + Number of Disks * 2 + Number of Images to Download/Upload`
- For example, if your cluster has five nodes, each node has two disks, and ten images will be uploaded simultaneously, the IP range should be greater than or equal to `/26` (`5 * 4 + 5 * 2 * 2 + 10 = 50`).
IP range should not conflict or overlap with Kubernetes cluster networks (`10.42.0.0/16`, `10.43.0.0/16`, `10.52.0.0/16` and `10.53.0.0/16` are reserved).
- IP range should be in IPv4 CIDR format and Longhorn pods use Storage Network as follows:
`instance-manager-e` and `instance-manager-r` pods: These require 2 IPs per node. During an upgrade, two versions of these pods will exist (old and new), and the old version will be deleted once the upgrade is successful.
- `backing-image-ds` pods: These are employed to process on-the-fly uploads and downloads of backing image data sources. These pods will be removed once the image uploads or downloads are completed.
- `backing-image-manager` pods: 1 IP per disk, similar to the instance manager pods. Two versions of these will coexist during an upgrade, and the old ones will be removed after the upgrade is completed.
- The required number of IPs is calculated using a simple formula: `Required Number of IPs = Number of Nodes * 4 + Number of Disks * 2 + Number of Images to Download/Upload`
- For example, if your cluster has five nodes, each node has two disks, and ten images will be uploaded simultaneously, the IP range should be greater than or equal to `/26` (`5 * 4 + 5 * 2 * 2 + 10 = 50`).
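The formula can be sketched as a small calculator that also reports the smallest CIDR prefix able to hold the result (a sketch for illustration; the function names are not from the docs):

```python
import math

def required_ips(nodes: int, disks_per_node: int, images: int) -> int:
    # Required Number of IPs = Number of Nodes * 4 + Number of Disks * 2 + Number of Images
    return nodes * 4 + nodes * disks_per_node * 2 + images

def smallest_prefix(ip_count: int) -> int:
    # Smallest IPv4 prefix length whose block contains at least ip_count addresses.
    return 32 - math.ceil(math.log2(ip_count))

ips = required_ips(nodes=5, disks_per_node=2, images=10)
print(f"{ips} IPs -> /{smallest_prefix(ips)}")  # 50 IPs fit in a /26 (64 addresses)
```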


We will take the following configuration as an example to explain the details of the Storage Network.
1 change: 0 additions & 1 deletion docs/airgap.md
@@ -1,5 +1,4 @@
---
id: airgap
sidebar_position: 3
sidebar_label: Air Gapped Environment
title: "Air Gapped Environment"
1 change: 0 additions & 1 deletion docs/authentication.md
@@ -1,5 +1,4 @@
---
id: authentication
sidebar_position: 5
sidebar_label: Authentication
title: "Authentication"
1 change: 0 additions & 1 deletion docs/faq.md
@@ -1,5 +1,4 @@
---
id: faq
sidebar_position: 17
sidebar_label: FAQ
title: "FAQ"
2 changes: 1 addition & 1 deletion docs/host/_category_.json
@@ -5,6 +5,6 @@
"collapsed": false,
"link": {
"type": "doc",
"id": "host-management"
"id": "host"
}
}
1 change: 0 additions & 1 deletion docs/host/host.md
@@ -1,5 +1,4 @@
---
id: host-management
sidebar_position: 1
sidebar_label: Host Management
title: "Host Management"
3 changes: 1 addition & 2 deletions docs/index.md
@@ -1,5 +1,4 @@
---
id: overview
sidebar_position: 1
sidebar_label: Harvester Overview
slug: /
@@ -21,7 +20,7 @@ The Harvester architecture consists of cutting-edge open-source technologies:
- **Built on top of Kubernetes.** [Kubernetes](https://kubernetes.io/) has become the predominant infrastructure language across all form factors, and Harvester is an HCI solution with Kubernetes under the hood.
- **Virtualization management with Kubevirt.** [Kubevirt](https://kubevirt.io/) provides virtualization management using KVM on top of Kubernetes.
- **Storage management with Longhorn.** [Longhorn](https://longhorn.io/) provides distributed block storage and tiering.
- **Observability with Grafana and Prometheus.** [Granfana](https://grafana.com/) and [Prometheus](https://prometheus.io/) provide robust monitoring and logging.
- **Observability with Grafana and Prometheus.** [Grafana](https://grafana.com/) and [Prometheus](https://prometheus.io/) provide robust monitoring and logging.

![](/img/v1.2/architecture.svg)

1 change: 0 additions & 1 deletion docs/install/iso-install.md
@@ -1,5 +1,4 @@
---
id: index
sidebar_position: 2
sidebar_label: ISO Installation
title: "ISO Installation"
1 change: 0 additions & 1 deletion docs/install/requirements.md
@@ -1,5 +1,4 @@
---
id: requirements
sidebar_position: 1
sidebar_label: Hardware and Network Requirements
title: "Hardware and Network Requirements"
1 change: 0 additions & 1 deletion docs/logging/harvester-logging.md
@@ -1,5 +1,4 @@
---
id: harvester-logging
sidebar_position: 1
sidebar_label: Logging
title: "Logging"
1 change: 0 additions & 1 deletion docs/monitoring/harvester-monitoring.md
@@ -1,5 +1,4 @@
---
id: harvester-monitoring
sidebar_position: 1
sidebar_label: Monitoring
title: "Monitoring"
1 change: 0 additions & 1 deletion docs/networking/clusternetwork.md
@@ -1,5 +1,4 @@
---
id: index
sidebar_position: 1
sidebar_label: Cluster Network
title: "Cluster Network"
4 changes: 2 additions & 2 deletions docs/networking/loadbalancer.md
@@ -39,7 +39,7 @@ Harvester VM load balancer doesn't support Windows VMs because the guest agent i
To create a new Harvester VM load balancer:
1. Go to the **Networks > Load Balancer** page and select **Create**.
1. Select the **Namespace** and specify the **Name**.
1. Go to the **Basic** tab to choose the IPAM mode, which can be **DHCP** or **IP Pool**. If you select **IP Pool**, prepare an IP pool first, specify the IP pool name, or choose **auto**. If you choose **auto**, the LB automatically selects an IP pool according to [the IP pool selection policy](/networking/ippool.md/#selection-policy).
1. Go to the **Basic** tab to choose the IPAM mode, which can be **DHCP** or **IP Pool**. If you select **IP Pool**, prepare an IP pool first, specify the IP pool name, or choose **auto**. If you choose **auto**, the LB automatically selects an IP pool according to [the IP pool selection policy](./ippool.md#selection-policy).
![](/img/v1.2/networking/create-lb-01.png)
1. Go to the **Listeners** tab to add listeners. You must specify the **Port**, **Protocol**, and **Backend Port** for each listener.
![](/img/v1.2/networking/create-lb-02.png)
@@ -66,4 +66,4 @@ In conjunction with Harvester Cloud Provider, the Harvester load balancer provid
![](/img/v1.2/networking/guest-kubernetes-cluster-lb.png)
When you create, update, or delete an LB service on a guest cluster with Harvester Cloud Provider, the Harvester Cloud Provider will create a Harvester LB automatically.

For more details, refer to [Harvester Cloud Provider](/rancher/cloud-provider.md).
For more details, refer to [Harvester Cloud Provider](../rancher/cloud-provider.md).
2 changes: 1 addition & 1 deletion docs/rancher/cloud-provider.md
@@ -44,7 +44,7 @@ For a detailed support matrix, please refer to the **Harvester CCM & CSI Driver
### Deploying to the RKE1 Cluster with Harvester Node Driver
When spinning up an RKE cluster using the Harvester node driver, you can perform two steps to deploy the `Harvester` cloud provider:

1. Select `Harvester(Out-of-tree)` option.
1. Select `Harvester (Out-of-tree)` option.

![](/img/v1.2/rancher/rke-cloud-provider.png)

2 changes: 1 addition & 1 deletion docs/rancher/csi-driver.md
@@ -34,7 +34,7 @@ Currently, the Harvester CSI driver only supports single-node read-write(RWO) vo

### Deploying with Harvester RKE1 node driver

- Select the `Harvester(Out-of-tree)` option.
- Select the `Harvester (Out-of-tree)` option.

![](/img/v1.2/rancher/rke-cloud-provider.png)

1 change: 0 additions & 1 deletion docs/rancher/rancher-integration.md
@@ -1,5 +1,4 @@
---
id: index
sidebar_position: 1
sidebar_label: Rancher Integration
title: "Rancher Integration"
1 change: 0 additions & 1 deletion docs/terraform/terraform-provider.md
@@ -1,5 +1,4 @@
---
id: terraform-provider
sidebar_position: 1
sidebar_label: Harvester Terraform Provider
title: "Harvester Terraform Provider"
1 change: 0 additions & 1 deletion docs/troubleshooting/installation.md
@@ -1,5 +1,4 @@
---
id: index
sidebar_position: 1
sidebar_label: Installation
title: "Installation"
1 change: 0 additions & 1 deletion docs/upgrade/automatic.md
@@ -1,5 +1,4 @@
---
id: index
sidebar_position: 1
sidebar_label: Upgrading Harvester
title: "Upgrading Harvester"
1 change: 0 additions & 1 deletion docs/upload-image.md
@@ -1,5 +1,4 @@
---
id: upload-image
sidebar_position: 6
sidebar_label: Upload Images
title: "Upload Images"
1 change: 0 additions & 1 deletion docs/vm/create-vm.md
@@ -1,5 +1,4 @@
---
id: index
sidebar_position: 1
sidebar_label: Create a Virtual Machine
title: "Create a Virtual Machine"
4 changes: 2 additions & 2 deletions docs/vm/hotplug-volume.md
@@ -28,9 +28,9 @@ The following steps assume that you have a running VM and a ready volume:
1. Go to the **Virtual Machines** page.
1. Find the VM that you want to add a volume to and select **⋮ > Add Volume**.

![Add Volume Button](/img/v1.2/vm/add-volume-button.png)
![Add Volume Button](/img/v1.2/vm/add-volume-button.png)

1. Enter the **Name** and select the **Volume**.
1. Click **Apply**.

![Add Volume Panel](/img/v1.2/vm/add-volume-panel.png)
![Add Volume Panel](/img/v1.2/vm/add-volume-panel.png)
4 changes: 2 additions & 2 deletions docs/vm/resource-overcommit.md
@@ -34,12 +34,12 @@ Users can modify the global `overcommit-config` by following the steps below, an

1. Go to the **Advanced > Settings** page.

![overcommit page](/img/v1.2/vm/overcommit-page.png)
![overcommit page](/img/v1.2/vm/overcommit-page.png)

1. Find the `overcommit-config` setting.
1. Configure the desired CPU, Memory, and Storage ratio.

![overcommit panel](/img/v1.2/vm/overcommit-panel.png)
![overcommit panel](/img/v1.2/vm/overcommit-panel.png)
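As context for what the ratios mean: each value is a percentage, and the scheduler reserves only `limit * 100 / ratio` of the physical resource for a workload. A sketch under that assumption (the 1600/150/200 defaults below are from a typical Harvester install, not from this page):

```python
def reserved(limit: float, ratio_percent: int) -> float:
    # With overcommit, the scheduler reserves limit * 100 / ratio of the resource.
    return limit * 100 / ratio_percent

# Assumed defaults: cpu=1600, memory=150, storage=200 (percent).
print(reserved(8, 1600))   # an 8-vCPU VM reserves 0.5 physical cores
print(reserved(16, 150))   # a 16 GiB memory limit reserves about 10.67 GiB
```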

## Configure overcommit for a single virtual machine

1 change: 0 additions & 1 deletion docs/volume/create-volume.md
@@ -1,5 +1,4 @@
---
id: index
sidebar_position: 1
sidebar_label: Create a Volume
title: "Create a Volume"
12 changes: 6 additions & 6 deletions docs/volume/volume-snapshots.md
@@ -21,11 +21,11 @@ You can create a volume snapshot from an existing volume by following these step

1. Choose the volume that you want to take a snapshot of and select **⋮ > Take Snapshot**.

![create-volume-snapshot-1](/img/v1.2/volume/create-volume-snapshot-1.png)
![create-volume-snapshot-1](/img/v1.2/volume/create-volume-snapshot-1.png)

1. Enter a **Name** for the snapshot.

![create-volume-snapshot-2](/img/v1.2/volume/create-volume-snapshot-2.png)
![create-volume-snapshot-2](/img/v1.2/volume/create-volume-snapshot-2.png)

1. Select **Create** to finish creating a new volume snapshot.

@@ -45,16 +45,16 @@ You can restore a new volume from an existing volume snapshot by following these

1. Select **⋮ > Restore**.

![restore-volume-snapshot-1](/img/v1.2/volume/restore-volume-snapshot-1.png)
![restore-volume-snapshot-1](/img/v1.2/volume/restore-volume-snapshot-1.png)

![restore-volume-snapshot-2](/img/v1.2/volume/restore-volume-snapshot-2.png)
![restore-volume-snapshot-2](/img/v1.2/volume/restore-volume-snapshot-2.png)

1. Specify the **Name** of the new volume.

![restore-volume-snapshot-3](/img/v1.2/volume/restore-volume-snapshot-3.png)
![restore-volume-snapshot-3](/img/v1.2/volume/restore-volume-snapshot-3.png)

1. If the source volume is not an image volume, you can select a different **StorageClass**. You cannot change the **StorageClass** if the source volume is an image volume.

![restore-volume-snapshot-4](/img/v1.2/volume/restore-volume-snapshot-4.png)
![restore-volume-snapshot-4](/img/v1.2/volume/restore-volume-snapshot-4.png)

1. Select **Create** to finish restoring a new volume.
4 changes: 4 additions & 0 deletions i18n/zh/docusaurus-plugin-content-docs/current.json
@@ -498,5 +498,9 @@
"sidebar.api.doc.List Virtual Machines For All Namespaces": {
"message": "List Virtual Machines For All Namespaces",
"description": "The label for the doc item List Virtual Machines For All Namespaces in sidebar api, linking to the doc api/list-virtual-machine-for-all-namespaces"
},
"sidebar.docs.category.Available Addons": {
"message": "可用插件",
"description": "The label for category Available Addons in sidebar docs"
}
}
46 changes: 33 additions & 13 deletions i18n/zh/docusaurus-plugin-content-docs/current/advanced/addons.md
@@ -1,28 +1,48 @@
---
sidebar_position: 4
sidebar_position: 5
sidebar_label: 插件
title: "插件"
---

_从 v1.1.0 起可用_

从 v1.1.0 开始,Harvester 将使用插件(Addon)来提供可选功能。
Harvester 将使用插件(Addon)来提供可选功能。

这样,我们能够确保 Harvester 占用较少的空间,同时用户能够根据他们的实际用例或要求启用/禁用功能。

不同的底层插件支持不同程度的定制。

v1.1.0 目前附带了两个插件:
* [pcidevices-controller](./pcidevices.md)
* [vm-import-controller](./vmimport.md)

![](/img/v1.2/addons/DefaultAddons.png)

_从 v1.2.0 起可用_

v1.2.0 附带了另外两个插件:
_从 v1.1.0 起可用_

Harvester v1.2.0 附带了五个插件:
* [pcidevices-controller](./addons/pcidevices.md)
* [vm-import-controller](./addons/vmimport.md)
* [rancher-monitoring](../monitoring/harvester-monitoring.md)
* [rancher-logging](../logging/harvester-logging.md)
* [harvester-seeder](./addons/seeder.md)

![](/img/v1.2/addons/AddonsV120.png)

:::note

**harvester-seeder** 作为 Harvester v1.2.0 中的实验性功能发布,并在 **Name** 中添加了一个 **Experimental** 标签。

:::

你可以通过选择插件并从 **Basic** 选项卡中选择 **⋮** > **Enable** 来启用**已禁用**的插件。

![](/img/v1.2/addons/enable-rancher-logging-addon.png)

成功启用插件后,**State** 将变为 **DeploySuccessful**。

![](/img/v1.2/addons/deploy-successful-addon.png)

你可以通过选择插件并从 **Basic** 选项卡中选择 **⋮** > **Disable** 来禁用**已启用**的插件。

![](/img/v1.2/addons/disable-rancher-monitoring-addon.png)

当插件成功禁用后,**State** 将变为 **Disabled**。

:::note

禁用插件后,配置数据将被存储,以便在再次启用插件时重复使用。

:::
@@ -0,0 +1,6 @@
{
"position": 6,
"label": "可用插件",
"collapsible": true,
"collapsed": true
}
@@ -1,7 +1,7 @@
---
sidebar_position: 6
sidebar_position: 2
sidebar_label: PCI 设备
title: "PCI 设备(实验功能)"
title: "PCI 设备"
---

_从 v1.1.0 起可用_
@@ -15,6 +15,8 @@ _从 v1.1.0 起可用_

![](/img/v1.2/vm-import-controller/EnableAddon.png)

成功部署 `pcidevices-controller` 插件后,可能需要几分钟时间进行扫描并使 PCIDevice CRD 变得可用。
![](/img/v1.2/pcidevices/PcideviceEnabled.png)
## 在 PCI 设备上启用直通

1. 前往 `Advanced > PCI Devices` 页面:
@@ -57,3 +59,32 @@ _从 v1.1.0 起可用_
## 在 VM 内为 PCI 设备安装驱动程序

这里涉及的操作与在主机中安装驱动程序一样。PCI 透传功能将主机设备绑定到 `vfio-pci` 驱动程序,让 VM 能够使用自己的驱动程序。你可以查看安装在 VM 中的 NVIDIA 驱动程序的[屏幕截图](https://tobilehman.com/posts/suse-harvester-pci/#toc),其中包括证明设备驱动程序可以正常工作的 CUDA 示例。

## SRIOV 网络设备
_从 v1.2.0 起可用_

![](/img/v1.2/pcidevices/SriovNetworkDevicesLink.png)

`pcidevices-controller` 插件现在可以扫描底层主机上的网络接口并检查它们是否支持 SRIOV Virtual Function (VF)。如果找到有效的设备,`pcidevices-controller` 将生成一个新的`SRIOVNetworkDevice` 对象。

![](/img/v1.2/pcidevices/SriovNetworkDevicesList.png)

要在 SriovNetworkDevice 上创建 VF,你可以单击 **⋮ > Enable**,然后定义 **Number of Virtual Functions**。
![](/img/v1.2/pcidevices/SriovNetworkDeviceEnable.png)

![](/img/v1.2/pcidevices/SriovNetworkVFDefinition.png)

`pcidevices-controller` 将定义网络接口上的 VF,并为新创建的 VF 报告新的 PCI 设备状态。

![](/img/v1.2/pcidevices/SriovNetworkDevicesVFStatus.png)

下次重新扫描时,`pcidevices-controller` 将为 VF 创建 PCIDevices。这可能需要 1 分钟的时间。

你现在可以导航到 **PCI Devices** 页面来查看新设备。

我们还引入了一个新的过滤器来帮助你通过底层网络接口来过滤 PCI 设备。

![](/img/v1.2/pcidevices/SriovNetworkDevicesFilter.png)

新创建的 PCI 设备可以像其他 PCI 设备一样直通到虚拟机。
![](/img/v1.2/pcidevices/SriovNetworkDevicesFilterResult.png)