Merge branch 'master' into 4776-storage-class-parms
innobead authored Sep 8, 2023
2 parents cbf485d + 0510034 commit 6b2de70
Showing 5 changed files with 76 additions and 0 deletions.
11 changes: 11 additions & 0 deletions content/docs/1.5.2/deploy/important-notes/index.md
@@ -12,6 +12,17 @@ Please see [here](https://github.com/longhorn/longhorn/releases/tag/v{{< current

Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v{{< current-version >}} because this is the minimum version Longhorn v{{< current-version >}} supports.

### Offline Upgrade Required To Fully Prevent Unexpected Replica Expansion

Longhorn v1.5.2 introduces a new mechanism to prevent [unexpected replica
expansion](../../../../kb/troubleshooting-unexpected-expansion-leads-to-degredation-or-attach-failure). This
mechanism is entirely transparent. However, a volume is only protected if it is running a new version of longhorn-engine
inside a new version of longhorn-instance-manager and managed by a new version of longhorn-manager. The [live upgrade
process](../../deploy/upgrade/upgrade-engine#live-upgrade) results in a volume running a new version of longhorn-engine
in an old version of longhorn-instance-manager until it is detached (by scaling its consuming workload down) and
reattached (by scaling its consuming workload up). Consider scaling workloads down and back up again as soon as possible
after upgrading from a version without this mechanism (v1.5.1 or older) to v{{< current-version >}}.
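
A minimal sketch of the detach/reattach step, assuming the volume is consumed by a Deployment named `my-app` (the workload name, namespace, and replica count below are illustrative):

```shell
# Scale the consuming workload down so the Longhorn volume is detached.
kubectl -n default scale deployment my-app --replicas=0
# Wait until the volume shows as detached, then scale back up so it reattaches
# with the new longhorn-instance-manager.
kubectl -n default scale deployment my-app --replicas=1
```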

### Attachment/Detachment Refactoring Side Effect On The Upgrade Process

In Longhorn v1.5.0, we refactored the internal volume attach/detach mechanism.
11 changes: 11 additions & 0 deletions content/docs/1.6.0/deploy/important-notes/index.md
@@ -12,6 +12,17 @@ Please see [here](https://github.com/longhorn/longhorn/releases/tag/v{{< current

Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v{{< current-version >}} because this is the minimum version Longhorn v{{< current-version >}} supports.

### Offline Upgrade Required To Fully Prevent Unexpected Replica Expansion

Longhorn v1.6.0 introduces a new mechanism to prevent [unexpected replica
expansion](../../../../kb/troubleshooting-unexpected-expansion-leads-to-degredation-or-attach-failure). This
mechanism is entirely transparent. However, a volume is only protected if it is running a new version of longhorn-engine
inside a new version of longhorn-instance-manager and managed by a new version of longhorn-manager. The [live upgrade
process](../../deploy/upgrade/upgrade-engine#live-upgrade) results in a volume running a new version of longhorn-engine
in an old version of longhorn-instance-manager until it is detached (by scaling its consuming workload down) and
reattached (by scaling its consuming workload up). Consider scaling workloads down and back up again as soon as possible
after upgrading from a version without this mechanism (v1.5.1 or older) to v{{< current-version >}}.

### Attachment/Detachment Refactoring Side Effect On The Upgrade Process

In Longhorn v1.5.0, we refactored the internal volume attach/detach mechanism.
14 changes: 14 additions & 0 deletions content/docs/1.6.0/references/settings.md
@@ -65,6 +65,8 @@ weight: 1
- [Storage Minimal Available Percentage](#storage-minimal-available-percentage)
- [Storage Over Provisioning Percentage](#storage-over-provisioning-percentage)
- [Storage Reserved Percentage For Default Disk](#storage-reserved-percentage-for-default-disk)
- [Allow Empty Node Selector Volume](#allow-empty-node-selector-volume)
- [Allow Empty Disk Selector Volume](#allow-empty-disk-selector-volume)
- [Danger Zone](#danger-zone)
- [Concurrent Replica Rebuild Per Node Limit](#concurrent-replica-rebuild-per-node-limit)
- [Kubernetes Taint Toleration](#kubernetes-taint-toleration)
@@ -656,6 +658,18 @@ The reserved percentage specifies the percentage of disk space that will not be

This setting only affects the default disks of newly added nodes or of nodes present when Longhorn is installed.

#### Allow Empty Node Selector Volume

> Default: `true`

This setting allows replicas of a volume without a node selector to be scheduled on nodes that have tags.

#### Allow Empty Disk Selector Volume

> Default: `true`

This setting allows replicas of a volume without a disk selector to be scheduled on disks that have tags.
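
As a minimal sketch, both settings can be inspected and changed with `kubectl`, assuming the setting resources are named `allow-empty-node-selector-volume` and `allow-empty-disk-selector-volume` following Longhorn's usual kebab-case naming (they can also be changed from the Longhorn UI):

```shell
# Inspect the current values of the two settings.
kubectl -n longhorn-system get settings.longhorn.io allow-empty-node-selector-volume allow-empty-disk-selector-volume
# Disallow scheduling replicas of selector-less volumes onto tagged nodes.
kubectl -n longhorn-system patch settings.longhorn.io allow-empty-node-selector-volume \
  --type merge -p '{"value":"false"}'
```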

### Danger Zone

#### Concurrent Replica Rebuild Per Node Limit
2 changes: 2 additions & 0 deletions content/docs/1.6.0/volumes-and-nodes/scheduling.md
@@ -46,6 +46,8 @@ For more information on settings that are relevant to scheduling replicas on nod
- [Replica Disk Level Soft Anti-Affinity](../../references/settings/#replica-disk-level-soft-anti-affinity)
- [Storage Minimal Available Percentage](../../references/settings/#storage-minimal-available-percentage)
- [Storage Over Provisioning Percentage](../../references/settings/#storage-over-provisioning-percentage)
- [Allow Empty Node Selector Volume](../../references/settings/#allow-empty-node-selector-volume)
- [Allow Empty Disk Selector Volume](../../references/settings/#allow-empty-disk-selector-volume)

### Notice
Longhorn relies on label `topology.kubernetes.io/zone=<Zone name of the node>` or `topology.kubernetes.io/region=<Region name of the node>` in the Kubernetes node object to identify the zone/region.
38 changes: 38 additions & 0 deletions content/kb/space-consumption-guideline.md
@@ -0,0 +1,38 @@
---
title: "Space consumption guideline"
author: Shuo Wu
draft: false
date: 2023-08-25
categories:
- "instruction"
---

## Applicable versions

All Longhorn versions, though some of the features mentioned below were introduced in v1.4.0 or v1.5.0.

## Volumes consume much more space than expected

Because Longhorn volumes can hold historical data as snapshots, a volume's actual size can be much greater than its spec size. For a better understanding of the concept of volume size, see [this section](../../docs/1.5.1/volumes-and-nodes/volume-size/#volume-actual-size).

In addition, operations such as backup, rebuilding, and expansion create hidden system snapshots. Hence, a volume may hold snapshots even if users never create one manually.

To avoid wasting space on historical data/snapshots, we recommend applying a recurring job such as `snapshot-delete` that limits the snapshot count of each volume. You can check [the recurring job section](../../docs/1.5.1/snapshots-and-backups/scheduling-backups-and-snapshots) to see how it works.
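
For example, a minimal sketch of such a recurring job applied with `kubectl`, assuming the `longhorn-system` namespace and an illustrative name, schedule, and retain count:

```shell
# Create a recurring job that prunes snapshots of volumes in the default group,
# keeping at most 5 snapshots per volume (all values here are illustrative).
kubectl -n longhorn-system apply -f - <<EOF
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: snapshot-delete-daily
  namespace: longhorn-system
spec:
  name: snapshot-delete-daily
  task: snapshot-delete
  cron: "0 2 * * *"
  retain: 5
  concurrency: 2
  groups:
  - default
  labels: {}
EOF
```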

## Filesystem used size is much smaller than volume actual size

The reason for this symptom is explained in [the volume size section](../../docs/1.5.1/volumes-and-nodes/volume-size/#volume-actual-size) as well. Briefly, a Longhorn volume is a block device that is unaware of the filesystem created on top of it. Deleting a file is a filesystem-layer operation that does not actually free up blocks in the underlying volume.

To ask the volume (the block device) to release the blocks of removed files, you can rely on `fstrim`. This `trim` operation has been supported since Longhorn v1.4.0. Please see [this section](../../docs/1.5.1/volumes-and-nodes/trim-filesystem) for details.

To automate the trim operation, you can apply `filesystem-trim` recurring jobs to volumes. Note that trimming behaves like a write operation and can be resource-consuming, so please do not trigger it for many volumes at the same time.
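
As a minimal sketch, a one-off trim can also be run manually from inside the workload pod, assuming the pod name and mount path below are adjusted for your environment and that the container ships `fstrim` with sufficient privileges:

```shell
# Trim the filesystem of the Longhorn volume mounted at /data inside the pod.
kubectl exec -it my-app-pod -- fstrim -v /data
```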

## Disk exhaustion

In this case, the node is probably marked as NotReady due to disk pressure. Therefore, the most critical measure is to recover the node without losing volume data.

To recover the node and the disk, we recommend directly removing some redundant replica directories from the full disk. Here, a redundant replica is one whose volume still has healthy replicas on other disks. Longhorn will then automatically rebuild new replicas on other disks if possible.
Besides, users may need to expand the existing disks or add more disks to avoid future disk exhaustion.
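
A minimal sketch of this recovery flow, assuming the default data path and an illustrative replica directory name (only remove a replica directory after confirming its volume has healthy replicas elsewhere):

```shell
# List replicas and the nodes/disks they are scheduled on.
kubectl -n longhorn-system get replicas.longhorn.io -o wide
# On the node with the exhausted disk, remove a redundant replica's data directory.
# /var/lib/longhorn is only the default data path; the directory name is illustrative.
rm -rf /var/lib/longhorn/replicas/pvc-example-1234-abcd
```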

Note that disk exhaustion may be caused by replicas being scheduled unevenly. Users can check [the Replica Auto Balance setting](../../docs/1.5.1/high-availability/auto-balance-replicas) for this scenario.
