From ae331389f48ece433890c3c9177b8c0a4c746a6b Mon Sep 17 00:00:00 2001
From: Shuo Wu
Date: Fri, 25 Aug 2023 21:35:07 +0800
Subject: [PATCH 1/3] docs: Add space-consumption-guideline

Longhorn 6592

Signed-off-by: Shuo Wu
---
 content/kb/space-consumption-guideline.md | 38 +++++++++++++++++++++++
 1 file changed, 38 insertions(+)
 create mode 100644 content/kb/space-consumption-guideline.md

diff --git a/content/kb/space-consumption-guideline.md b/content/kb/space-consumption-guideline.md
new file mode 100644
index 000000000..6a85e68b7
--- /dev/null
+++ b/content/kb/space-consumption-guideline.md
@@ -0,0 +1,38 @@
+---
+title: "Space consumption guideline"
+author: Shuo Wu
+draft: false
+date: 2023-08-25
+categories:
+- "instruction"
+---
+
+## Applicable versions
+
+All Longhorn versions, though some of the features mentioned below were introduced in v1.4.0 or v1.5.0.
+
+## Volumes consume much more space than expected
+
+Because Longhorn volumes can hold historical data as snapshots, a volume's actual size can be much greater than its spec size. For a better understanding of the concept of volume size, see [this section](../../docs/1.5.1/volumes-and-nodes/volume-size/#volume-actual-size).
+
+In addition, some operations, such as backup, rebuilding, and expansion, create hidden system snapshots. Hence, a volume may hold snapshots even if users never create one manually.
+
+To avoid wasting space on historical data/snapshots, we recommend applying a recurring job such as `snapshot-delete` that limits the snapshot count of each volume (a sample manifest is included at the end of this article). You can check [the recurring job section](../../docs/1.5.1/snapshots-and-backups/scheduling-backups-and-snapshots) to see how it works.
+
+## Filesystem used size is much smaller than volume actual size
+
+The reason for this symptom is also explained in [the volume size section](../../docs/1.5.1/volumes-and-nodes/volume-size/#volume-actual-size). Briefly, a Longhorn volume is a block device that is unaware of the filesystem built on top of it. Deleting a file is a filesystem-layer operation that does not actually free up blocks in the underlying volume.
+
+To ask the volume (the block device) to release the blocks of removed files, you can rely on `fstrim`. This trim operation has been supported since Longhorn v1.4.0. Please see [this section](../../docs/1.5.1/volumes-and-nodes/trim-filesystem) for details.
+
+To make the trim operation automatic, you can apply `filesystem-trim` recurring jobs to volumes (see the example at the end of this article). Note that trimming behaves like a write operation and may be resource-consuming, so please do not trigger trim operations for many volumes at the same time.
+
+## Disk exhaustion
+
+In this case, the node is probably marked as NotReady due to disk pressure. Therefore, the most critical task is to recover the node without losing volume data.
+
+To recover the node and the disk, we recommend directly removing some redundant replica directories from the full disk (a verification sketch is included at the end of this article). Here, a redundant replica means that the corresponding volume has healthy replicas on other disks. Later on, Longhorn will automatically rebuild new replicas on other disks if possible.
+In addition, users may need to expand the existing disks or add more disks to avoid future disk exhaustion.
+
+Note that disk exhaustion may be caused by replicas being unevenly scheduled. Users can check [the Replica Auto Balance setting](../../docs/1.5.1/high-availability/auto-balance-replicas) for this scenario.
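+
+## Examples
+
+The manifest below is a minimal sketch of a `snapshot-delete` recurring job. The job name, cron schedule, retain count, and group are illustrative and assume the `longhorn.io/v1beta2` RecurringJob API, so adjust them to your environment:
+
+```yaml
+apiVersion: longhorn.io/v1beta2
+kind: RecurringJob
+metadata:
+  name: snapshot-delete-daily        # illustrative name
+  namespace: longhorn-system
+spec:
+  name: snapshot-delete-daily
+  task: snapshot-delete              # removes snapshots beyond the retain count
+  cron: "0 2 * * *"                  # every day at 02:00
+  retain: 5                          # keep at most 5 snapshots per volume
+  concurrency: 2
+  groups:
+  - default                          # applies to volumes in the default group
+  labels: {}
+```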
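+
+To trim a filesystem manually, you can run `fstrim` against the mount point of the volume (the path below is only an example):
+
+```shell
+fstrim /mnt/longhorn-volume
+```
+
+A `filesystem-trim` recurring job can automate this. Again, this is only a sketch; the name, schedule, and group are illustrative:
+
+```yaml
+apiVersion: longhorn.io/v1beta2
+kind: RecurringJob
+metadata:
+  name: filesystem-trim-weekly
+  namespace: longhorn-system
+spec:
+  name: filesystem-trim-weekly
+  task: filesystem-trim              # trims the filesystem to reclaim freed blocks
+  cron: "0 3 * * 0"                  # weekly, during off-peak hours
+  retain: 0                          # not used by the filesystem-trim task
+  concurrency: 1                     # avoid trimming many volumes at once
+  groups:
+  - default
+  labels: {}
+```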
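+
+Before removing any replica directory from a full disk, confirm that the corresponding volume is still healthy elsewhere. The sketch below assumes the default data path `/var/lib/longhorn`; the volume and directory names are placeholders:
+
+```shell
+# List the replica data directories on the full disk.
+# Each directory name starts with the volume name plus a random suffix.
+ls /var/lib/longhorn/replicas/
+
+# Check that the volume still has healthy replicas on other disks/nodes,
+# e.g. via the Longhorn UI or the replica custom resources:
+kubectl -n longhorn-system get replicas.longhorn.io | grep <volume-name>
+
+# Only after confirming redundancy, remove the redundant replica directory:
+rm -rf /var/lib/longhorn/replicas/<volume-name>-<suffix>
+```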

From 76ea5fdd63717b98ac1c51c57043ac7cdabbee87 Mon Sep 17 00:00:00 2001
From: Eric Weber
Date: Tue, 29 Aug 2023 10:47:06 -0500
Subject: [PATCH 2/3] Offline upgrade is required to prevent unexpected replica expansion

Longhorn 5845

Signed-off-by: Eric Weber
---
 content/docs/1.5.2/deploy/important-notes/index.md | 11 +++++++++++
 content/docs/1.6.0/deploy/important-notes/index.md | 11 +++++++++++
 2 files changed, 22 insertions(+)

diff --git a/content/docs/1.5.2/deploy/important-notes/index.md b/content/docs/1.5.2/deploy/important-notes/index.md
index d050ce481..2767ddba6 100644
--- a/content/docs/1.5.2/deploy/important-notes/index.md
+++ b/content/docs/1.5.2/deploy/important-notes/index.md
@@ -12,6 +12,17 @@ Please see [here](https://github.com/longhorn/longhorn/releases/tag/v{{< current

 Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v{{< current-version >}} because this is the minimum version Longhorn v{{< current-version >}} supports.

+### Offline Upgrade Required To Fully Prevent Unexpected Replica Expansion
+
+Longhorn v1.5.2 introduces a new mechanism to prevent [unexpected replica
+expansion](../../../../kb/troubleshooting-unexpected-expansion-leads-to-degredation-or-attach-failure). This
+mechanism is entirely transparent. However, a volume is only protected if it is running a new version of longhorn-engine
+inside a new version of longhorn-instance-manager and managed by a new version of longhorn-manager. The [live upgrade
+process](../../deploy/upgrade/upgrade-engine#live-upgrade) results in a volume running a new version of longhorn-engine
+in an old version of longhorn-instance-manager until it is detached (by scaling its consuming workload down) and
+reattached (by scaling its consuming workload up). Consider scaling workloads down and back up again as soon as possible
+after upgrading from a version without this mechanism (v1.5.1 or older) to v{{< current-version >}}.
+
 ### Attachment/Detachment Refactoring Side Effect On The Upgrade Process

 In Longhorn v1.5.0, we refactored the internal volume attach/detach mechanism.
diff --git a/content/docs/1.6.0/deploy/important-notes/index.md b/content/docs/1.6.0/deploy/important-notes/index.md
index d050ce481..019025f4b 100644
--- a/content/docs/1.6.0/deploy/important-notes/index.md
+++ b/content/docs/1.6.0/deploy/important-notes/index.md
@@ -12,6 +12,17 @@ Please see [here](https://github.com/longhorn/longhorn/releases/tag/v{{< current

 Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v{{< current-version >}} because this is the minimum version Longhorn v{{< current-version >}} supports.

+### Offline Upgrade Required To Fully Prevent Unexpected Replica Expansion
+
+Longhorn v1.6.0 introduces a new mechanism to prevent [unexpected replica
+expansion](../../../../kb/troubleshooting-unexpected-expansion-leads-to-degredation-or-attach-failure). This
+mechanism is entirely transparent. However, a volume is only protected if it is running a new version of longhorn-engine
+inside a new version of longhorn-instance-manager and managed by a new version of longhorn-manager. The [live upgrade
+process](../../deploy/upgrade/upgrade-engine#live-upgrade) results in a volume running a new version of longhorn-engine
+in an old version of longhorn-instance-manager until it is detached (by scaling its consuming workload down) and
+reattached (by scaling its consuming workload up). Consider scaling workloads down and back up again as soon as possible
+after upgrading from a version without this mechanism (v1.5.1 or older) to v{{< current-version >}}.
+
 ### Attachment/Detachment Refactoring Side Effect On The Upgrade Process

 In Longhorn v1.5.0, we refactored the internal volume attach/detach mechanism.

From 0510034217fca2dbb81f95f04eb0d90d3eab10f1 Mon Sep 17 00:00:00 2001
From: Jack Lin
Date: Wed, 30 Aug 2023 15:10:07 +0800
Subject: [PATCH 3/3] doc(scheduling): add allow empty selector settings

ref: longhorn/longhorn 4826

Signed-off-by: Jack Lin
---
 content/docs/1.6.0/references/settings.md          | 14 ++++++++++++++
 content/docs/1.6.0/volumes-and-nodes/scheduling.md |  2 ++
 2 files changed, 16 insertions(+)

diff --git a/content/docs/1.6.0/references/settings.md b/content/docs/1.6.0/references/settings.md
index 183fea236..45b97fe1f 100644
--- a/content/docs/1.6.0/references/settings.md
+++ b/content/docs/1.6.0/references/settings.md
@@ -64,6 +64,8 @@ weight: 1
   - [Storage Minimal Available Percentage](#storage-minimal-available-percentage)
   - [Storage Over Provisioning Percentage](#storage-over-provisioning-percentage)
   - [Storage Reserved Percentage For Default Disk](#storage-reserved-percentage-for-default-disk)
+  - [Allow Empty Node Selector Volume](#allow-empty-node-selector-volume)
+  - [Allow Empty Disk Selector Volume](#allow-empty-disk-selector-volume)
 - [Danger Zone](#danger-zone)
   - [Concurrent Replica Rebuild Per Node Limit](#concurrent-replica-rebuild-per-node-limit)
   - [Kubernetes Taint Toleration](#kubernetes-taint-toleration)
@@ -643,6 +645,18 @@ The reserved percentage specifies the percentage of disk space that will not be

 This setting only affects the default disk of a new adding node or nodes when installing Longhorn.

+#### Allow Empty Node Selector Volume
+
+> Default: `true`
+
+This setting allows replicas of volumes without a node selector to be scheduled on nodes with tags.
+
+#### Allow Empty Disk Selector Volume
+
+> Default: `true`
+
+This setting allows replicas of volumes without a disk selector to be scheduled on disks with tags.
+
 ### Danger Zone

 #### Concurrent Replica Rebuild Per Node Limit
diff --git a/content/docs/1.6.0/volumes-and-nodes/scheduling.md b/content/docs/1.6.0/volumes-and-nodes/scheduling.md
index dae01972f..7de0bc4b2 100644
--- a/content/docs/1.6.0/volumes-and-nodes/scheduling.md
+++ b/content/docs/1.6.0/volumes-and-nodes/scheduling.md
@@ -43,6 +43,8 @@ For more information on settings that are relevant to scheduling replicas on nod
 - [Replica Zone Level Soft Anti-Affinity](../../references/settings/#replica-zone-level-soft-anti-affinity)
 - [Storage Minimal Available Percentage](../../references/settings/#storage-minimal-available-percentage)
 - [Storage Over Provisioning Percentage](../../references/settings/#storage-over-provisioning-percentage)
+- [Allow Empty Node Selector Volume](../../references/settings/#allow-empty-node-selector-volume)
+- [Allow Empty Disk Selector Volume](../../references/settings/#allow-empty-disk-selector-volume)

 ### Notice

 Longhorn relies on label `topology.kubernetes.io/zone=` or `topology.kubernetes.io/region=` in the Kubernetes node object to identify the zone/region.
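
The important notes added above recommend detaching and reattaching each volume by scaling its consuming workload down and back up after the upgrade. As a rough sketch (the namespace, workload name, and replica count are placeholders), this could look like:

```shell
# Scale the workload down so its Longhorn volume is detached.
kubectl -n my-namespace scale deployment my-app --replicas=0

# Wait until the volume reports the detached state.
kubectl -n longhorn-system get volumes.longhorn.io

# Scale the workload back up; the volume reattaches using the new
# longhorn-instance-manager and longhorn-engine.
kubectl -n my-namespace scale deployment my-app --replicas=1
```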
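
The two new scheduling settings documented above can be viewed or changed through the Longhorn UI under Settings, or via kubectl as sketched below. The kebab-case setting names are assumed to follow Longhorn's usual naming convention:

```shell
# Inspect the current values (setting names assumed from the display names above).
kubectl -n longhorn-system get settings.longhorn.io allow-empty-node-selector-volume allow-empty-disk-selector-volume

# Example: stop scheduling replicas of selector-less volumes onto tagged nodes/disks.
kubectl -n longhorn-system patch settings.longhorn.io allow-empty-node-selector-volume --type=merge -p '{"value": "false"}'
kubectl -n longhorn-system patch settings.longhorn.io allow-empty-disk-selector-volume --type=merge -p '{"value": "false"}'
```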