diff --git a/docs/_images/PMM_Add_Instance_PostgreSQL.png b/docs/_images/PMM_Add_Instance_PostgreSQL.png index 90de870ec2..bce68609b6 100644 Binary files a/docs/_images/PMM_Add_Instance_PostgreSQL.png and b/docs/_images/PMM_Add_Instance_PostgreSQL.png differ diff --git a/docs/_images/PMM_Add_Instance_PostgreSQL_TLS.png b/docs/_images/PMM_Add_Instance_PostgreSQL_TLS.png index 1f5a915ddf..0b49d5528c 100644 Binary files a/docs/_images/PMM_Add_Instance_PostgreSQL_TLS.png and b/docs/_images/PMM_Add_Instance_PostgreSQL_TLS.png differ diff --git a/docs/_images/PMM_Add_Instance_PostgreSQL_autodiscovery_custom.png b/docs/_images/PMM_Add_Instance_PostgreSQL_autodiscovery_custom.png new file mode 100644 index 0000000000..dc9db2825c Binary files /dev/null and b/docs/_images/PMM_Add_Instance_PostgreSQL_autodiscovery_custom.png differ diff --git a/docs/_images/PMM_Add_Instance_PostgreSQL_autodiscovery_disabled.png b/docs/_images/PMM_Add_Instance_PostgreSQL_autodiscovery_disabled.png new file mode 100644 index 0000000000..b9fa83ec24 Binary files /dev/null and b/docs/_images/PMM_Add_Instance_PostgreSQL_autodiscovery_disabled.png differ diff --git a/docs/_images/PMM_Add_Instance_PostgreSQL_autodiscovery_enabled.png b/docs/_images/PMM_Add_Instance_PostgreSQL_autodiscovery_enabled.png new file mode 100644 index 0000000000..ca4c6ca3d3 Binary files /dev/null and b/docs/_images/PMM_Add_Instance_PostgreSQL_autodiscovery_enabled.png differ diff --git a/docs/_images/PMM_Inventory_Service_Selection.png b/docs/_images/PMM_Inventory_Service_Selection.png index 03227b0d5e..2b6ce61e34 100644 Binary files a/docs/_images/PMM_Inventory_Service_Selection.png and b/docs/_images/PMM_Inventory_Service_Selection.png differ diff --git a/docs/_images/PMM_Inventory_cluster_view_filter.png b/docs/_images/PMM_Inventory_cluster_view_filter.png new file mode 100644 index 0000000000..ad21a32d85 Binary files /dev/null and b/docs/_images/PMM_Inventory_cluster_view_filter.png differ diff --git a/docs/dbaas/DBaaS_template.md b/docs/dbaas/DBaaS_template.md index 9abd59e2c1..7d458f8dd6 100644 --- a/docs/dbaas/DBaaS_template.md +++ b/docs/dbaas/DBaaS_template.md @@ -1,5 +1,8 @@ # Database cluster Templates +!!! caution alert alert-primary "Do not use for mission-critical workloads" + DBaaS feature is deprecated. We encourage you to use [Percona Everest](http://per.co.na/pmm-to-everest) instead. Check our [Migration guide](http://per.co.na/pmm-to-everest-guide). + Database clusters can be created from templates using PMM. Database cluster Template allows operators to customize Database Clusters based on their requirements, environments, or infrastructure. Examples diff --git a/docs/dbaas/architecture.md b/docs/dbaas/architecture.md index ec02186d83..42f269ef78 100644 --- a/docs/dbaas/architecture.md +++ b/docs/dbaas/architecture.md @@ -1,14 +1,12 @@ # DBaaS architecture -!!! caution alert alert-warning "Caution" - DBaaS functionality is currently in [technical preview](../details/glossary.md#technical-preview) and is subject to change. +!!! caution alert alert-primary "Do not use for mission-critical workloads" + DBaaS feature is deprecated. We encourage you to use [Percona Everest](http://per.co.na/pmm-to-everest) instead. Check our [Migration guide](http://per.co.na/pmm-to-everest-guide). 
- -DBaaS is built on top of PMM and Kubernetes and the high-level architecture is shown below +DBaaS is built on top of PMM and Kubernetes and the high-level architecture is shown below: ![!](../_images/dbaas_arch.jpg) - In DBaaS, the role of PMM is as follows: 1. Expose Public REST API diff --git a/docs/dbaas/backup_restore.md b/docs/dbaas/backup_restore.md index 066d1e8c77..d0182bdc9c 100644 --- a/docs/dbaas/backup_restore.md +++ b/docs/dbaas/backup_restore.md @@ -1,7 +1,7 @@ # DBaaS backup and restore -!!! caution alert alert-warning "Caution" - DBaaS functionality is currently in [technical preview](../details/glossary.md#technical-preview) and is subject to change. +!!! caution alert alert-primary "Do not use for mission-critical workloads" + DBaaS feature is deprecated. We encourage you to use [Percona Everest](http://per.co.na/pmm-to-everest) instead. Check our [Migration guide](http://per.co.na/pmm-to-everest-guide). You can add a backup schedule while creating DB clusters in DBaaS. This feature is a fusion of backup management and DBaaS in PMM. Currently, DBaaS only supports scheduled backups, which can only be enabled when a database cluster is created. diff --git a/docs/dbaas/databases.md b/docs/dbaas/databases.md index 9cd5293974..7bc9d36521 100644 --- a/docs/dbaas/databases.md +++ b/docs/dbaas/databases.md @@ -1,7 +1,7 @@ ## DB clusters -!!! caution alert alert-warning "Caution" - DBaaS functionality is currently in [technical preview](../details/glossary.md#technical-preview) and is subject to change. +!!! caution alert alert-primary "Do not use for mission-critical workloads" + DBaaS feature is deprecated. We encourage you to use [Percona Everest](http://per.co.na/pmm-to-everest) instead. Check our [Migration guide](http://per.co.na/pmm-to-everest-guide). ### Add a DB Cluster diff --git a/docs/dbaas/get-started.md b/docs/dbaas/get-started.md index 22235e5f13..13ef6bd7c9 100644 --- a/docs/dbaas/get-started.md +++ b/docs/dbaas/get-started.md @@ -1,7 +1,7 @@ # Getting started with DBaaS -!!! caution alert alert-warning "Caution" - DBaaS functionality is currently in [technical preview](../details/glossary.md#technical-preview) and is subject to change. Only AWS EKS, Minikube and CIVO clusters are supported. +!!! caution alert alert-primary "Do not use for mission-critical workloads" + DBaaS feature is deprecated. We encourage you to use [Percona Everest](http://per.co.na/pmm-to-everest) instead. Check our [Migration guide](http://per.co.na/pmm-to-everest-guide). The DBaaS dashboard is where you add, remove, and operate on Kubernetes and database clusters. diff --git a/docs/dbaas/index.md b/docs/dbaas/index.md index 9c3a46e055..3d19c29c20 100644 --- a/docs/dbaas/index.md +++ b/docs/dbaas/index.md @@ -1,14 +1,13 @@ # Introduction to Database as a service (DBaaS) +!!! caution alert alert-primary "Do not use for mission-critical workloads" + - DBaaS feature is deprecated. We encourage you to use [Percona Everest](http://per.co.na/pmm-to-everest) instead. Check our [Migration guide](http://per.co.na/pmm-to-everest-guide). + - DBaaS feature is available for PMM Admin users. + Database as a service (DBaaS) feature of Percona Monitoring and Management (PMM) is an open source solution to run MySQL and MongoDB clusters on Kubernetes. It allows you to utilize the benefits of Kubernetes and Percona's operators to run and manage database clusters. -!!! 
caution alert alert-primary "Do not use for mission critical workloads"
-    DBaaS feature is available for PMM Admin users
-    DBaaS functionality is currently in [technical preview](../details/glossary.md#technical-preview) and is subject to change.
-
-
## Start here
- [Architecture and how DBaaS works](architecture.html)
diff --git a/docs/dbaas/setting-up.md b/docs/dbaas/setting-up.md
index d10f650692..a34a62fe43 100644
--- a/docs/dbaas/setting-up.md
+++ b/docs/dbaas/setting-up.md
@@ -1,7 +1,7 @@
# Setting up DBaaS
-!!! caution alert alert-warning "Caution"
-    DBaaS functionality is currently in [technical preview](../details/glossary.md#technical-preview) and is subject to change.
+!!! caution alert alert-primary "Do not use for mission-critical workloads"
+    DBaaS feature is deprecated. We encourage you to use [Percona Everest](http://per.co.na/pmm-to-everest) instead. Check our [Migration guide](http://per.co.na/pmm-to-everest-guide).
To use the Database as a Service (DBaaS) solution in PMM there are a few things that need to be setup first including a suitable Kubernetes Cluster. If you've already got a kubernetes cluster you can jump ahead and [enable DBaaS in PMM](../dbaas/get-started.html).
diff --git a/docs/dbaas/troubleshoot-kubernetes.md b/docs/dbaas/troubleshoot-kubernetes.md
index 6c51ea5610..da9a771ff1 100644
--- a/docs/dbaas/troubleshoot-kubernetes.md
+++ b/docs/dbaas/troubleshoot-kubernetes.md
@@ -1,5 +1,9 @@
## Troubleshooting Kubernetes provisioning
+!!! caution alert alert-primary "Do not use for mission-critical workloads"
+    DBaaS feature is deprecated. We encourage you to use [Percona Everest](http://per.co.na/pmm-to-everest) instead. Check our [Migration guide](http://per.co.na/pmm-to-everest-guide).
+
+
There are two things that might go wrong during the provisioning:
1. OLM installation
diff --git a/docs/details/commands/pmm-admin.md b/docs/details/commands/pmm-admin.md
index 148997a09a..755d8cbd8d 100644
--- a/docs/details/commands/pmm-admin.md
+++ b/docs/details/commands/pmm-admin.md
@@ -77,6 +77,9 @@ PMM communicates with the PMM Server via a PMM agent process.
`--group=`
: Group name for external services. Default: `external`
+`--expose-exporter` (This flag is available starting with PMM 2.41.0.)
+: If you enable this flag, any IP address on the local network and anywhere on the internet can access exporter endpoints. If the flag is disabled or not present, exporter endpoints can be accessed only locally. The flag is disabled by default.
+
## COMMANDS
### GENERAL COMMANDS
diff --git a/docs/details/commands/pmm-agent.md b/docs/details/commands/pmm-agent.md
index 01da8349d6..9391f9fbc1 100644
--- a/docs/details/commands/pmm-agent.md
+++ b/docs/details/commands/pmm-agent.md
@@ -64,6 +64,8 @@ Most options can be set via environment variables (shown in parentheses).
| `--trace` | `PMM_AGENT_TRACE` | Enable trace output (implies `--debug`).
| `-h`, `--help` | | Show help (synonym for `pmm-agent help`).
| `--version` | | Show application version, PMM version, time-stamp, git commit hash and branch.
+| `--expose-exporter` (This flag is available starting with PMM 2.41.0.) | | If you enable this flag, any IP address on the local network and anywhere on the internet can access node exporter endpoints. If the flag is disabled, node exporter endpoints can be accessed only locally.
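For reference, here is a minimal sketch of enabling the flag when setting up the agent. The server address and credentials below are placeholder values, and passing `--expose-exporter` together with the `setup` subcommand is an assumption for illustration; only the flag itself is documented above:

```sh
# Sketch: set up pmm-agent and expose exporter endpoints beyond localhost.
# Replace the server address and credentials with your own values.
pmm-agent setup \
    --server-address=pmm-server:443 \
    --server-insecure-tls \
    --server-username=admin \
    --server-password=admin \
    --expose-exporter
```

Omitting the flag (the default) keeps exporter endpoints reachable only from the local host.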
+
## USAGE AND EXAMPLES OF `paths-base` FLAG
diff --git a/docs/details/dashboards/dashboard-inventory.md b/docs/details/dashboards/dashboard-inventory.md
index d7f99b6236..0d7c828f85 100644
--- a/docs/details/dashboards/dashboard-inventory.md
+++ b/docs/details/dashboards/dashboard-inventory.md
@@ -13,25 +13,39 @@ Inventory objects form a hierarchy with Node at the top, then Service and Agents
The **Services** tab displays the individual services, the nodes on which they run, and the Agents that help collect the service metrics along with the following information:
-**Service name** - The name or identifier associated with the service being monitored.
+| **Column Name**| **Description**|
+|--------------|--------------------------------|
+| Service name | The name or identifier associated with the service being monitored.|
+| Node name | Name or identifier associated with a specific node.|
+| Monitoring status | The **Monitoring** column summarizes the status of all the Agents assigned to the service. |
+| Address | The IP address or DNS name where the service is currently running. |
+| Port | The port number on which the service is running. |
+| Options | You can check **QAN** information and the **Dashboard** for each service by clicking on the **** icon. You can also check additional information about the service by clicking on the **** icon; this expands the service entry to show reference information like service labels and IDs.|
-**Node Name** - Name or identifier associated with a specific node.
-**Monitoring status** - The **Monitoring** column summarizes the status of all the Agents assigned to the service.
+![!image](../../_images/PMM_Inventory_Service_Selection.png)
-**Address** - The IP address or DNS where the service is currently running.
+#### Attributes
-**Port** - The port number on which the service is running.
+These are some of the attributes for a service:
-You can check Query Analytics information and the Service Overview Dashboard for each service by clicking on the icon in the **Options** column.
+- Each instance of a service gets a `service_type` attribute so one can clearly tell what type of database it is, for instance: `mysql`, `postgresql`, `mongodb`, etc.
-From here you can also check additional information about the service, by clicking on the icon. This expands the service entry to show reference information like service labels and IDs.
+- Every service is related to a certain node via its `node_id` attribute. This feature allows PMM to support the monitoring of multiple instances on a single node, with different service names, e.g. `mysql1-3306` and `mysql1-3307`.
-![!image](../../_images/PMM_Inventory_Service_Selection.png)
+- Starting with PMM 2.41.0, each instance of a service gets a `version` attribute in the response of the endpoint that provides the list of services being monitored by PMM. This makes it easy to visualize the database server version.
+
+    However, there are the following limitations:
+
+    - The version is not captured for the internal PostgreSQL database.
+    - The version is only captured when a new service is being added to PMM and the agent installed on the client side is version 2.41.0 or later.
+    - When a database is upgraded, you will not see the database version updated automatically. It will be updated if you remove and then re-add the service.
-Each instance of a service gets a `service_type` attribute so one can clearly tell what type of database it is, for instance: `mysql`, `postgresql`, `mongodb`, etc. Every service is related to a certain node via its `node_id` attribute. This feature allows to support multiple instances on a single node, with different service names, e.g. `mysql1-3306`, and `mysql1-3307`.
+#### Agents
-Each binary (exporter, agent) running on a client will get an `agent_type` value. Examples:
+Each binary (exporter, agent) running on a client will get an `agent_type` value.
+
+Examples:
- `pmm-agent` is at the top of the tree, assigned to PMM Agent itself
- `node_exporter` is assigned to an agent that extracts the node metrics
@@ -41,6 +55,7 @@ To view the agents running on a service and their health status, click **OK** or
![!image](../../_images/PMM_Inventory_Service_Agent_Properties.png)
+
#### Node-service relationship
Starting with PMM 2.40.0, you can click on the link in the **Node Name** column to view the node on which a specific service is running and analyze how node-level resource utilization impacts the performance of those services.
@@ -103,6 +118,9 @@ Click the downward arrow to view cluster details, including the services running
![!image](../../_images/PMM_Inventory_cluster_view_details.png)
+Furthermore, you can filter the clusters by criteria such as Cluster name, Status, Service name, Node name, Monitoring, Address, and Port.
+
+![!image](../../_images/PMM_Inventory_cluster_view_filter.png)

### Nodes tab
diff --git a/docs/get-started/query-analytics.md b/docs/get-started/query-analytics.md
index bc79e59ed5..7e5dcadb7b 100644
--- a/docs/get-started/query-analytics.md
+++ b/docs/get-started/query-analytics.md
@@ -18,7 +18,7 @@ The dashboard contains three panels:
- the [Details Panel](#details-panel).
!!! note alert alert-primary ""
-    Query Analytics data retrieval is not instantaneous and can be delayed due to network conditions. In such situations *no data* is reported and a gap appears in the sparkline.
+    Query Analytics data retrieval may experience delays due to network conditions. As a result, a small amount of data (up to 1 hour) will be buffered in memory and reported when the connection is restored.
## Filters Panel
diff --git a/docs/how-to/HA.md b/docs/how-to/HA.md
new file mode 100644
index 0000000000..df7f218dee
--- /dev/null
+++ b/docs/how-to/HA.md
@@ -0,0 +1,677 @@
# Set up PMM in HA mode

!!! caution alert alert-warning "Important"
    This feature has been added in PMM 2.41.0 and is currently in [Technical Preview](https://docs.percona.com/percona-monitoring-and-management/details/glossary.html#technical-preview). Early adopters are advised to use this feature for testing purposes only as it is subject to change.

Follow these instructions to set up PMM in a high-availability (HA) configuration using Docker containers.

PMM Server is deployed in a high-availability setup where three PMM Server instances are configured, one acting as the leader and the others as followers. These servers provide services including:

- ClickHouse: A fast, open-source analytical database.
- VictoriaMetrics: A scalable, long-term storage solution for time series data.
- PostgreSQL: A powerful open-source relational database management system, used in this setup to store PMM data like inventory, settings, and other feature-related data.

## Importance of HA

High availability increases the reliability of the PMM service: the leader server handles all client requests, and a follower takes over if the leader fails.

- Gossip Protocol: This protocol lets PMM servers discover each other and share information about their states. It is used for managing the PMM server list and for failure detection.
- Raft Protocol: This is a consensus algorithm that allows PMM servers to agree on a leader and ensures that logs are replicated among all machines.

## Prerequisites

You will need the following before you can begin the deployment:

- Docker installed and configured on your system. If you haven't installed Docker, you can follow **[this guide](https://docs.docker.com/get-docker/)**.

## Procedure to set up PMM in HA mode

!!! note alert alert-primary "Note"
    - The sections below provide instructions for setting up the services on both the same and separate instances. However, it is not recommended to run the services on a single machine for production purposes. This approach is only recommended for the development environment.
    - It is recommended to use clustered versions of PostgreSQL, VictoriaMetrics, ClickHouse, etc., instead of standalone versions when setting up the services.

The steps to set up PMM in HA mode are:

### **Step 1: Define environment variables**

Before you start with the setup, define the necessary environment variables on each instance where the services will be running. These variables will be used in subsequent commands.
+ +For all IP addresses, use the format `17.10.1.x`, and for all usernames and passwords, use a string format like `example`. + + +| **Variable** | **Description** +| ------------------------------------------------| ------------------------------------------------------------------------------------------------------------------------------- +| `CH_HOST_IP` | The IP address of the instance where the ClickHouse service is running or the desired IP address for the ClickHouse container within the Docker network, depending on your setup.

Example: `17.10.1.2` +| `VM_HOST_IP` | The IP address of the instance where the VictoriaMetrics service is running or the desired IP address for the VictoriaMetrics container within the Docker network, depending on your setup.

Example: `17.10.1.3` +| `PG_HOST_IP` | The IP address of the instance where the PostgreSQL service is running or the desired IP address for the PostgreSQL container within the Docker network, depending on your setup.

Example: `17.10.1.4` +| `PG_USERNAME` | The username for your PostgreSQL server.

Example: `pmmuser` +| `PG_PASSWORD` | The password for your PostgreSQL server.

Example: `pgpassword` +| `GF_USERNAME` | The username for your Grafana database user.

Example: `gfuser` +| `GF_PASSWORD` | The password for your Grafana database user.

Example: `gfpassword` +| `PMM_ACTIVE_IP` | The IP address of the instance where the active PMM server is running or the desired IP address for your active PMM server container within the Docker network, depending on your setup.

Example: `17.10.1.5` +| `PMM_ACTIVE_NODE_ID` | The unique ID for your active PMM server node.

Example: `pmm-server-active` +| `PMM_PASSIVE_IP` | The IP address of the instance where the first passive PMM server is running or the desired IP address for your first passive PMM server container within the Docker network, depending on your setup.

Example: `17.10.1.6` +| `PMM_PASSIVE_NODE_ID` | The unique ID for your first passive PMM server node.

Example: `pmm-server-passive` +| `PMM_PASSIVE2_IP` | The IP address of the instance where the second passive PMM server is running or the desired IP address for your second passive PMM server container within the Docker network, depending on your setup.

Example: `17.10.1.7` +| `PMM_PASSIVE2_NODE_ID` | The unique ID for your second passive PMM server node.

Example: `pmm-server-passive2` +| `PMM_DOCKER_IMAGE`         | The specific PMM Server Docker image for this guide.

Example: `percona/pmm-server:2`

??? example "Setting the environment variables"

    ```
    export CH_HOST_IP=17.10.1.2
    export VM_HOST_IP=17.10.1.3
    export PG_HOST_IP=17.10.1.4
    export PG_USERNAME=pmmuser
    export PG_PASSWORD=pgpassword
    export GF_USERNAME=gfuser
    export GF_PASSWORD=gfpassword
    export PMM_ACTIVE_IP=17.10.1.5
    export PMM_ACTIVE_NODE_ID=pmm-server-active
    export PMM_PASSIVE_IP=17.10.1.6
    export PMM_PASSIVE_NODE_ID=pmm-server-passive
    export PMM_PASSIVE2_IP=17.10.1.7
    export PMM_PASSIVE2_NODE_ID=pmm-server-passive2
    export PMM_DOCKER_IMAGE=percona/pmm-server:2
    ```

!!! note alert alert-primary "Note"
    Ensure that you have all the environment variables from Step 1 set in each instance where you run these commands.

### **Step 2: Create Docker network (Optional)**

1. Set up a Docker network for the PMM services if you plan to run all the services on the same instance. This Docker network lets your containers communicate with each other, which is essential for the High Availability (HA) mode to function properly in PMM. You can skip this step if you run your services on separate instances.

2. Run the following command to create a Docker network:

    ```sh
    docker network create pmm-network --subnet=17.10.1.0/16
    ```

### **Step 3: Set up ClickHouse**

ClickHouse is an open-source column-oriented database management system. In PMM, ClickHouse stores Query Analytics (QAN) metrics, which provide detailed information about your queries.

To set up ClickHouse:

1. Pull the ClickHouse Docker image.

    ```sh
    docker pull clickhouse/clickhouse-server:23.8.2.7-alpine
    ```

2. Create a Docker volume for ClickHouse data.

    ```sh
    docker volume create ch_data
    ```

3. Run the ClickHouse container.

    === "Run services on same instance"

        ```sh
        docker run -d \
        --name ch \
        --network pmm-network \
        --ip ${CH_HOST_IP} \
        -p 9000:9000 \
        -v ch_data:/var/lib/clickhouse \
        clickhouse/clickhouse-server:23.8.2.7-alpine
        ```

    === "Run services on a separate instance"

        ```sh
        docker run -d \
        --name ch \
        -p 9000:9000 \
        -v ch_data:/var/lib/clickhouse \
        clickhouse/clickhouse-server:23.8.2.7-alpine
        ```

    !!! note alert alert-primary "Note"
        - If you run the services on the same instance, the `--network` and `--ip` flags assign a specific IP address to the container within the Docker network created in the previous step. This IP address is referenced in subsequent steps as the ClickHouse service address.
        - The `--network` and `--ip` flags are not required if the services are running on separate instances, since ClickHouse will bind to the default network interface.

### **Step 4: Set up VictoriaMetrics**

VictoriaMetrics provides a long-term storage solution for your time-series data. In PMM, it is used to store Prometheus metrics.

To set up VictoriaMetrics:

1. Pull the VictoriaMetrics Docker image.

    ```sh
    docker pull victoriametrics/victoria-metrics:v1.93.4
    ```

2. Create a Docker volume for VictoriaMetrics data.

    ```sh
    docker volume create vm_data
    ```

3. Run the VictoriaMetrics container.

    You can either run all the services on the same instance or on separate instances.
    === "Run services on same instance"

        ```sh
        docker run -d \
        --name vm \
        --network pmm-network \
        --ip ${VM_HOST_IP} \
        -p 8428:8428 \
        -p 8089:8089 \
        -p 8089:8089/udp \
        -p 2003:2003 \
        -p 2003:2003/udp \
        -p 4242:4242 \
        -v vm_data:/storage \
        victoriametrics/victoria-metrics:v1.93.4 \
        --storageDataPath=/storage \
        --graphiteListenAddr=:2003 \
        --opentsdbListenAddr=:4242 \
        --httpListenAddr=:8428 \
        --influxListenAddr=:8089
        ```

    === "Run services on a separate instance"

        ```sh
        docker run -d \
        --name vm \
        -p 8428:8428 \
        -p 8089:8089 \
        -p 8089:8089/udp \
        -p 2003:2003 \
        -p 2003:2003/udp \
        -p 4242:4242 \
        -v vm_data:/storage \
        victoriametrics/victoria-metrics:v1.93.4 \
        --storageDataPath=/storage \
        --graphiteListenAddr=:2003 \
        --opentsdbListenAddr=:4242 \
        --httpListenAddr=:8428 \
        --influxListenAddr=:8089
        ```

    !!! note alert alert-primary "Note"
        - If you run the services on the same instance, the `--network` and `--ip` flags are used to assign a specific IP address to the container within the Docker network created in Step 2. This IP address is referenced in subsequent steps as the VictoriaMetrics service address.
        - The `--network` and `--ip` flags are not required if the services are running on separate instances, as VictoriaMetrics will bind to the default network interface.

### **Step 5: Set up PostgreSQL**

PostgreSQL is a powerful, open-source object-relational database system. In PMM, it's used to store data related to inventory, settings, and other features.

To set up PostgreSQL:

1. Pull the Postgres Docker image.

    ```sh
    docker pull postgres:14
    ```

2. Create a Docker volume for Postgres data:

    ```bash
    docker volume create pg_data
    ```

3. Create a directory to store init SQL queries:

    ```bash
    mkdir -p /path/to/queries
    ```

    Replace `/path/to/queries` with the path where you want to store your `init` SQL queries.

4. Create an `init.sql.template` file in the newly created directory with the following content. The `<pg_username>`, `<pg_password>`, `<gf_username>`, and `<gf_password>` placeholders are replaced with your environment variables in the next step:

    ```sql
    CREATE DATABASE "pmm-managed";
    CREATE USER <pg_username> WITH ENCRYPTED PASSWORD '<pg_password>';
    GRANT ALL PRIVILEGES ON DATABASE "pmm-managed" TO <pg_username>;
    CREATE DATABASE grafana;
    CREATE USER <gf_username> WITH ENCRYPTED PASSWORD '<gf_password>';
    GRANT ALL PRIVILEGES ON DATABASE grafana TO <gf_username>;

    \c pmm-managed

    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
    ```

5. Use **`sed`** to replace the placeholders with the environment variables and write the output to **`init.sql`**.

    ```bash
    sed -e 's/<pg_username>/'"$PG_USERNAME"'/g' \
        -e 's/<pg_password>/'"$PG_PASSWORD"'/g' \
        -e 's/<gf_username>/'"$GF_USERNAME"'/g' \
        -e 's/<gf_password>/'"$GF_PASSWORD"'/g' \
        init.sql.template > init.sql
    ```

6. Run the PostgreSQL container.

    You can either run all the services on the same instance or on a separate instance.

    !!! note alert alert-primary "Note"
        It is recommended to use absolute paths instead of relative paths for volume mounts.
    === "Run services on same instance"

        ```sh
        docker run -d \
        --name pg \
        --network pmm-network \
        --ip ${PG_HOST_IP} \
        -p 5432:5432 \
        -e POSTGRES_PASSWORD=${PG_PASSWORD} \
        -v /path/to/queries:/docker-entrypoint-initdb.d/ \
        -v pg_data:/var/lib/postgresql/data \
        postgres:14 \
        postgres -c shared_preload_libraries=pg_stat_statements
        ```

    === "Run services on a separate instance"

        ```sh
        docker run -d \
        --name pg \
        -p 5432:5432 \
        -e POSTGRES_PASSWORD=${PG_PASSWORD} \
        -v /path/to/queries:/docker-entrypoint-initdb.d \
        -v pg_data:/var/lib/postgresql/data \
        postgres:14 \
        postgres -c shared_preload_libraries=pg_stat_statements
        ```

    Replace **`/path/to/queries`** with the directory containing your **`init.sql`** file. This command mounts that directory to **`docker-entrypoint-initdb.d`**, whose scripts are executed automatically on container startup.

    !!! note alert alert-primary "Note"
        - If you run the services on the same instance, the `--network` and `--ip` flags are used to assign a specific IP address to the container within the Docker network created in Step 2. This IP address is referenced in subsequent steps as the PostgreSQL service address.
        - The `--network` and `--ip` flags are not required if the services are running on separate instances, as PostgreSQL will bind to the default network interface.

### **Step 6: Running PMM Services**

The PMM server orchestrates the collection, storage, and visualization of metrics. In our high-availability setup, we'll have one active PMM server and two passive PMM servers.

1. Pull the PMM Server Docker image:

    ```bash
    docker pull ${PMM_DOCKER_IMAGE}
    ```

2. Create Docker volumes for PMM Server data:

    ```bash
    docker volume create pmm-server-active_data
    docker volume create pmm-server-passive_data
    docker volume create pmm-server-passive-2_data
    ```

3. Run the active PMM managed server. This server will serve as the primary monitoring server.

    You can either run all the services on the same instance or on a separate instance.
+ + === "Run services on same instance" + + ```sh + docker run -d \ + --name ${PMM_ACTIVE_NODE_ID} \ + --hostname ${PMM_ACTIVE_NODE_ID} \ + --network pmm-network \ + --ip ${PMM_ACTIVE_IP} \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_CLICKHOUSE=1 \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_POSTGRES=1 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_ADDR=${CH_HOST_IP}:9000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_DATABASE=pmm \ + -e PERCONA_TEST_PMM_CLICKHOUSE_BLOCK_SIZE=10000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_POOL_SIZE=2 \ + -e PERCONA_TEST_POSTGRES_ADDR=${PG_HOST_IP}:5432 \ + -e PERCONA_TEST_POSTGRES_USERNAME=${PG_USERNAME} \ + -e PERCONA_TEST_POSTGRES_DBPASSWORD=${PG_PASSWORD} \ + -e GF_DATABASE_URL=postgres://${GF_USERNAME}:${GF_PASSWORD}@${PG_HOST_IP}:5432/grafana \ + -e PMM_VM_URL=http://${VM_HOST_IP}:8428 \ + -e PMM_TEST_HA_ENABLE=1 \ + -e PMM_TEST_HA_BOOTSTRAP=1 \ + -e PMM_TEST_HA_NODE_ID=${PMM_ACTIVE_NODE_ID} \ + -e PMM_TEST_HA_ADVERTISE_ADDRESS=${PMM_ACTIVE_IP} \ + -e PMM_TEST_HA_GOSSIP_PORT=9096 \ + -e PMM_TEST_HA_RAFT_PORT=9097 \ + -e PMM_TEST_HA_GRAFANA_GOSSIP_PORT=9094 \ + -e PMM_TEST_HA_PEERS=${PMM_ACTIVE_IP},${PMM_PASSIVE_IP},${PMM_PASSIVE2_IP} \ + -v pmm-server-active_data:/srv \ + ${PMM_DOCKER_IMAGE} + ``` + + === "Run services on a seperate instance" + + ```sh + docker run -d \ + --name ${PMM_ACTIVE_NODE_ID} \ + -p 80:80 \ + -p 443:443 \ + -p 9094:9094 \ + -p 9096:9096 \ + -p 9094:9094/udp \ + -p 9096:9096/udp \ + -p 9097:9097 \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_CLICKHOUSE=1 \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_POSTGRES=1 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_ADDR=${CH_HOST_IP}:9000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_DATABASE=pmm \ + -e PERCONA_TEST_PMM_CLICKHOUSE_BLOCK_SIZE=10000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_POOL_SIZE=2 \ + -e PERCONA_TEST_POSTGRES_ADDR=${PG_HOST_IP}:5432 \ + -e PERCONA_TEST_POSTGRES_USERNAME=${PG_USERNAME} \ + -e PERCONA_TEST_POSTGRES_DBPASSWORD=${PG_PASSWORD} \ + -e GF_DATABASE_URL=postgres://${GF_USERNAME}:${GF_PASSWORD}@${PG_HOST_IP}:5432/grafana \ + -e PMM_VM_URL=http://${VM_HOST_IP}:8428 \ + -e PMM_TEST_HA_ENABLE=1 \ + -e PMM_TEST_HA_BOOTSTRAP=1 \ + -e PMM_TEST_HA_NODE_ID=${PMM_ACTIVE_NODE_ID} \ + -e PMM_TEST_HA_ADVERTISE_ADDRESS=${PMM_ACTIVE_IP} \ + -e PMM_TEST_HA_GOSSIP_PORT=9096 \ + -e PMM_TEST_HA_RAFT_PORT=9097 \ + -e PMM_TEST_HA_GRAFANA_GOSSIP_PORT=9094 \ + -e PMM_TEST_HA_PEERS=${PMM_ACTIVE_IP},${PMM_PASSIVE_IP},${PMM_PASSIVE2_IP} \ + -v pmm-server-active_data:/srv \ + ${PMM_DOCKER_IMAGE} + ``` + +4. Run the first passive PMM managed server. This server will act as a standby server, ready to take over if the active server fails. + + You can either run all the services on the same instance or a separate instance. 
+ + === "Run services on same instance" + + ```sh + docker run -d \ + --name ${PMM_PASSIVE_NODE_ID} \ + --hostname ${PMM_PASSIVE_NODE_ID} \ + --network pmm-network \ + --ip ${PMM_PASSIVE_IP} \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_CLICKHOUSE=1 \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_POSTGRES=1 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_ADDR=${CH_HOST_IP}:9000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_DATABASE=pmm \ + -e PERCONA_TEST_PMM_CLICKHOUSE_BLOCK_SIZE=10000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_POOL_SIZE=2 \ + -e PERCONA_TEST_POSTGRES_ADDR=${PG_HOST_IP}:5432 \ + -e PERCONA_TEST_POSTGRES_USERNAME=${PG_USERNAME} \ + -e PERCONA_TEST_POSTGRES_DBPASSWORD=${PG_PASSWORD} \ + -e GF_DATABASE_URL=postgres://${GF_USERNAME}:${GF_PASSWORD}@${PG_HOST_IP}:5432/grafana \ + -e PMM_VM_URL=http://${VM_HOST_IP}:8428 \ + -e PMM_TEST_HA_ENABLE=1 \ + -e PMM_TEST_HA_BOOTSTRAP=0 \ + -e PMM_TEST_HA_NODE_ID=${PMM_PASSIVE_NODE_ID} \ + -e PMM_TEST_HA_ADVERTISE_ADDRESS=${PMM_PASSIVE_IP} \ + -e PMM_TEST_HA_GOSSIP_PORT=9096 \ + -e PMM_TEST_HA_RAFT_PORT=9097 \ + -e PMM_TEST_HA_GRAFANA_GOSSIP_PORT=9094 \ + -e PMM_TEST_HA_PEERS=${PMM_ACTIVE_IP},${PMM_PASSIVE_IP},${PMM_PASSIVE2_IP} \ + -v pmm-server-passive_data:/srv \ + ${PMM_DOCKER_IMAGE} + ``` + + === "Run services on a seperate instance" + + ```sh + docker run -d \ + --name ${PMM_PASSIVE_NODE_ID} \ + -p 80:80 \ + -p 443:443 \ + -p 9094:9094 \ + -p 9096:9096 \ + -p 9094:9094/udp \ + -p 9096:9096/udp \ + -p 9097:9097 \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_CLICKHOUSE=1 \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_POSTGRES=1 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_ADDR=${CH_HOST_IP}:9000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_DATABASE=pmm \ + -e PERCONA_TEST_PMM_CLICKHOUSE_BLOCK_SIZE=10000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_POOL_SIZE=2 \ + -e PERCONA_TEST_POSTGRES_ADDR=${PG_HOST_IP}:5432 \ + -e PERCONA_TEST_POSTGRES_USERNAME=${PG_USERNAME} \ + -e PERCONA_TEST_POSTGRES_DBPASSWORD=${PG_PASSWORD} \ + -e GF_DATABASE_URL=postgres://${GF_USERNAME}:${GF_PASSWORD}@${PG_HOST_IP}:5432/grafana \ + -e PMM_VM_URL=http://${VM_HOST_IP}:8428 \ + -e PMM_TEST_HA_ENABLE=1 \ + -e PMM_TEST_HA_BOOTSTRAP=0 \ + -e PMM_TEST_HA_NODE_ID=${PMM_PASSIVE_NODE_ID} \ + -e PMM_TEST_HA_ADVERTISE_ADDRESS=${PMM_PASSIVE_IP} \ + -e PMM_TEST_HA_GOSSIP_PORT=9096 \ + -e PMM_TEST_HA_RAFT_PORT=9097 \ + -e PMM_TEST_HA_GRAFANA_GOSSIP_PORT=9094 \ + -e PMM_TEST_HA_PEERS=${PMM_ACTIVE_IP},${PMM_PASSIVE_IP},${PMM_PASSIVE2_IP} \ + -v pmm-server-passive_data:/srv \ + ${PMM_DOCKER_IMAGE} + ``` + +5. Run the second passive PMM managed server. Like the first passive server, this server will also act as a standby server. + + You can either run all the services on the same instance or a separate instance. 
+ + === "Run services on same instance" + + ```sh + docker run -d \ + --name ${PMM_PASSIVE2_NODE_ID} \ + --hostname ${PMM_PASSIVE2_NODE_ID} \ + --network pmm-network \ + --ip ${PMM_PASSIVE2_IP} \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_CLICKHOUSE=1 \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_POSTGRES=1 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_ADDR=${CH_HOST_IP}:9000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_DATABASE=pmm \ + -e PERCONA_TEST_PMM_CLICKHOUSE_BLOCK_SIZE=10000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_POOL_SIZE=2 \ + -e PERCONA_TEST_POSTGRES_ADDR=${PG_HOST_IP}:5432 \ + -e PERCONA_TEST_POSTGRES_USERNAME=${PG_USERNAME} \ + -e PERCONA_TEST_POSTGRES_DBPASSWORD=${PG_PASSWORD} \ + -e GF_DATABASE_URL=postgres://${GF_USERNAME}:${GF_PASSWORD}@${PG_HOST_IP}:5432/grafana \ + -e PMM_VM_URL=http://${VM_HOST_IP}:8428 \ + -e PMM_TEST_HA_ENABLE=1 \ + -e PMM_TEST_HA_BOOTSTRAP=0 \ + -e PMM_TEST_HA_NODE_ID=${PMM_PASSIVE2_NODE_ID} \ + -e PMM_TEST_HA_ADVERTISE_ADDRESS=${PMM_PASSIVE2_IP} \ + -e PMM_TEST_HA_GOSSIP_PORT=9096 \ + -e PMM_TEST_HA_RAFT_PORT=9097 \ + -e PMM_TEST_HA_GRAFANA_GOSSIP_PORT=9094 \ + -e PMM_TEST_HA_PEERS=${PMM_ACTIVE_IP},${PMM_PASSIVE_IP},${PMM_PASSIVE2_IP} \ + -v pmm-server-passive-2_data:/srv \ + ${PMM_DOCKER_IMAGE} + ``` + + === "Run services on a seperate instance" + + ```sh + docker run -d \ + --name ${PMM_PASSIVE2_NODE_ID} \ + -p 80:80 \ + -p 443:443 \ + -p 9094:9094 \ + -p 9096:9096 \ + -p 9094:9094/udp \ + -p 9096:9096/udp \ + -p 9097:9097 \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_CLICKHOUSE=1 \ + -e PERCONA_TEST_PMM_DISABLE_BUILTIN_POSTGRES=1 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_ADDR=${CH_HOST_IP}:9000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_DATABASE=pmm \ + -e PERCONA_TEST_PMM_CLICKHOUSE_BLOCK_SIZE=10000 \ + -e PERCONA_TEST_PMM_CLICKHOUSE_POOL_SIZE=2 \ + -e PERCONA_TEST_POSTGRES_ADDR=${PG_HOST_IP}:5432 \ + -e PERCONA_TEST_POSTGRES_USERNAME=${PG_USERNAME} \ + -e PERCONA_TEST_POSTGRES_DBPASSWORD=${PG_PASSWORD} \ + -e GF_DATABASE_URL=postgres://${GF_USERNAME}:${GF_PASSWORD}@${PG_HOST_IP}:5432/grafana \ + -e PMM_VM_URL=http://${VM_HOST_IP}:8428 \ + -e PMM_TEST_HA_ENABLE=1 \ + -e PMM_TEST_HA_BOOTSTRAP=0 \ + -e PMM_TEST_HA_NODE_ID=${PMM_PASSIVE2_NODE_ID} \ + -e PMM_TEST_HA_ADVERTISE_ADDRESS=${PMM_PASSIVE2_IP} \ + -e PMM_TEST_HA_GOSSIP_PORT=9096 \ + -e PMM_TEST_HA_RAFT_PORT=9097 \ + -e PMM_TEST_HA_GRAFANA_GOSSIP_PORT=9094 \ + -e PMM_TEST_HA_PEERS=${PMM_ACTIVE_IP},${PMM_PASSIVE_IP},${PMM_PASSIVE2_IP} \ + -v /srv/pmm-data:/srv \ + ${PMM_DOCKER_IMAGE} + ``` + + + !!! note alert alert-primary "Note" + + - Ensure to set the environment variables from Step 1 in each instance where you run these commands. + - If you run the service on the same instance, remove the **`-p`** flags. + - If you run the service on a separate instance, remove the **`--network`** and **`--ip`** flags. + + +### **Step 7: Running HAProxy** + +HAProxy provides high availability for your PMM setup by directing traffic to the current leader server via the `/v1/leaderHealthCheck` endpoint. + + +1. Pull the HAProxy Docker image. + + ```bash + docker pull haproxy:2.4.2-alpine + ``` + +2. Create a directory to store the SSL certificate. + + ```bash + mkdir -p /path/to/certs + ``` + + Replace `/path/to/certs` with the path where you want to store your SSL certificates. + +3. Navigate to this directory and generate a new private key. + + ```bash + openssl genrsa -out pmm.key 2048 + ``` + + This command generates a 2048-bit RSA private key and saves it to a file named `pmm.key`. + +4. Using the private key, generate a self-signed certificate. 
+ + ```bash + openssl req -new -x509 -key pmm.key -out pmm.crt -days 365 + ``` + + Enter country, state, organization name, etc. when asked. Use `-days 365` option for 365-day certificate validity. + +5. Copy your SSL certificate and private key to the directory you created in step 2. Ensure that the certificate file is named `pmm.crt` and the private key file is named `pmm.key`. + + Concatenate these two files to create a PEM file: + + ```bash + cat pmm.crt pmm.key > pmm.pem + ``` + +6. Create a directory to store HA Proxy configuration. + + ```bash + mkdir -p /path/to/haproxy-config + ``` + + Replace `/path/to/haproxy-config` with the path where you want to store your HAProxy configuration. + +7. Create an HAProxy configuration file named `haproxy.cfg.template` in that directory. This configuration tells HAProxy to use the `/v1/leaderHealthCheck` endpoint of each PMM server to identify the leader. + + ``` + global + log stdout local0 debug + log stdout local1 info + log stdout local2 info + daemon + + defaults + log global + mode http + option httplog + option dontlognull + timeout connect 5000 + timeout client 50000 + timeout server 50000 + + frontend http_front + bind *:80 + default_backend http_back + + frontend https_front + bind *:443 ssl crt /etc/haproxy/certs/pmm.pem + default_backend https_back + + backend http_back + option httpchk + http-check send meth POST uri /v1/leaderHealthCheck ver HTTP/1.1 hdr Host www + http-check expect status 200 + server pmm-server-active-http PMM_ACTIVE_IP:80 check + server pmm-server-passive-http PMM_PASSIVE_IP:80 check backup + server pmm-server-passive-2-http PMM_PASSIVE2_IP:80 check backup + + backend https_back + option httpchk + http-check send meth POST uri /v1/leaderHealthCheck ver HTTP/1.1 hdr Host www + http-check expect status 200 + server pmm-server-active-https PMM_ACTIVE_IP:443 check ssl verify none + server pmm-server-passive-https PMM_PASSIVE_IP:443 check ssl verify none + server pmm-server-passive-2-https PMM_PASSIVE2_IP:443 check ssl verify none + ``` + +8. Before starting the HAProxy container, use `sed` to replace the placeholders in `haproxy.cfg.template` with the environment variables, and write the output to `haproxy.cfg`. + + ```bash + sed -e "s/PMM_ACTIVE_IP/$PMM_ACTIVE_IP/g" \ + -e "s/PMM_PASSIVE_IP/$PMM_PASSIVE_IP/g" \ + -e "s/PMM_PASSIVE2_IP/$PMM_PASSIVE2_IP/g" \ + /path/to/haproxy.cfg.template > /path/to/haproxy.cfg + ``` + +9. Run the HAProxy container. + + ```bash + docker run -d \ + --name haproxy \ + --network pmm-network \ + -p 80:80 \ + -p 443:443 \ + -v /path/to/haproxy-config:/usr/local/etc/haproxy \ + -v /path/to/certs:/etc/haproxy/certs \ + haproxy:2.4.2-alpine + ``` + + Replace `/path/to/haproxy-config` with the path to the `haproxy.cfg` file you created in step 6, and `/path/to/certs` with the path to the directory containing the SSL certificate and private key. + +!!! note alert alert-primary "Note" + - It is recommended to use absolute paths instead of relative paths for volume mounts. + - If you're running services on separate instances, you can remove the `--network` flag. + +HAProxy is now configured to redirect traffic to the leader PMM managed server. This ensures highly reliable service by redirecting requests to the remainder of the servers in the event that the leader server goes down. + +### **Step 8: Accessing PMM** + +You can access the PMM web interface via HAProxy once all the components are set up and configured: + +1. Access the PMM services by navigating to `https://` in your web browser. 
    Replace `<haproxy-address>` with the IP address or hostname of the machine running the HAProxy container.
2. You should now see the PMM login screen. Log in using the default credentials, unless you changed them during setup.
3. You can use the PMM web interface to monitor your database infrastructure, analyze metrics, and perform various database management tasks.

Once your PMM environment has been set up in high-availability (HA) mode, you must register PMM Clients using the HAProxy IP address (or hostname) rather than the PMM Server address. Even if one PMM server becomes unavailable, clients will still be able to communicate with the servers.

You have now successfully set up PMM in HA mode using Docker containers. Your PMM environment is more resilient to failures and can continue providing monitoring services if any of the instances fail.


!!! note alert alert-primary "Note"
    Ensure that all containers are running and accessible. You can use `docker ps` to check the status of your Docker containers. If a container is not running, you can view its logs using the command `docker logs <container_name>` to investigate the issue.
\ No newline at end of file
diff --git a/docs/how-to/PMM_dump.md b/docs/how-to/PMM_dump.md
new file mode 100644
index 0000000000..a260c7ba13
--- /dev/null
+++ b/docs/how-to/PMM_dump.md
@@ -0,0 +1,38 @@
# Export PMM data with PMM Dump

PMM data dumps are compressed tarball files containing a comprehensive export of your PMM metrics and QAN data collected by PMM Server.

You can download these dataset files locally, or share them with Percona Support via an SFTP server. This enables you to share PMM data securely, which is especially useful when you need to troubleshoot PMM issues without providing access to your PMM instance.

Starting with PMM 2.41.0, you can generate PMM datasets straight from PMM. If you are using an older PMM version, you can use the [standalone PMM Dump utility](https://docs.percona.com/pmm-dump-documentation/installation.html) instead.

## Dump contents

The **dump.tar.gz** dump file is a .TAR archive compressed via Gzip. Here's what's inside the folders it contains:

- **meta.json**: metadata about the data dump
- **vm**: VictoriaMetrics data chunks in native VM format, organized by timeframe
- **ch**: Query Analytics (QAN) data stored in ClickHouse, organized by rows count
- **log.json**: logs detailing the export and archive creation process

## Create a data dump

To create a dump of your dataset:

1. From the main menu on the left, go to **Help > PMM Dump**.
2. Click **Create dataset** to go to the **Export new dataset** page.
3. Choose the service for which you want to create the dataset, or leave it empty to export all data.
4. Define the time range for the dataset.
5. Enable **Export QAN** to include Query Analytics (QAN) metrics alongside the core metrics.
6. Enable **Ignore load** to export the dump bypassing the default resource limit restrictions.
7. Click **Create dataset**. This will generate a data dump file and automatically record an entry in the PMM Dump table. From there, you can use the options available in the **Options** menu to send the dump file to Percona Support or download it locally for internal usage.

## Send a data dump to Percona Support

If you are a Percona customer, you can securely share PMM data dumps with Percona Support via SFTP.

1. From the main menu on the left, go to **Help > PMM Dump**.
2. Select the PMM dump entry that you want to send to Support.
3. 
In the **Options** column, expand the table row to check the PMM Service associated with the dataset, click the ellipsis (three vertical dots) and select **Send to Support**. +4. Fill in the [details of the SFTP server](https://percona.service-now.com/percona?id=kb_article_view&sysparm_article=KB0010247&sys_kb_id=bebd04da87e329504035b8c9cebb35a7&spa=1), then click **Send**. +5. Update your Support ticket to let Percona know that you've uploaded the dataset on the SFTP server. diff --git a/docs/how-to/index.md b/docs/how-to/index.md index 3e0056a948..6a307e3d9f 100644 --- a/docs/how-to/index.md +++ b/docs/how-to/index.md @@ -5,6 +5,7 @@ - [Upgrade](upgrade.md) PMM Server via the user interface. - [Secure](secure.md) your PMM installation. - [Optimize](optimize.md) the performance of your PMM installation. +- [Set up PMM in HA mode](HA.md). - [Annotate](annotate.md) charts to mark significant events. - [Share dashboards and panels](share-dashboard.md) to save or share. - [Extend Metrics](extend-metrics.md) with textfile collector. diff --git a/docs/release-notes/2.41.0.md b/docs/release-notes/2.41.0.md new file mode 100644 index 0000000000..3d85b1d8c9 --- /dev/null +++ b/docs/release-notes/2.41.0.md @@ -0,0 +1,75 @@ + +# Percona Monitoring and Management 2.41.0 + + +| **Release date** | Dec 12, 2023 | +| ----------------- | ----------------------------------------------------------------------------------------------- | +| **Installation** | [Installing Percona Monitoring and Management](https://www.percona.com/software/pmm/quickstart) | + +Percona Monitoring and Management (PMM) is an open source database monitoring, management, and observability solution for MySQL, PostgreSQL, and MongoDB. + + + +## Release Highlights + + +### Streamlined database problem reporting to Percona + +To improve the gathering and sharing of PMM metrics and data, we’ve now integrated the [pmm_dump client utility](https://docs.percona.com/pmm-dump-documentation/index.html) into PMM. Initially a standalone client for PMM Server, PMM Dump is now accessible in the PMM user interface. + +This integration enables you to collect PMM data to share with our Support team. + +To get started, in the main menu, go to **Help** > **PMM Dump** and select either to export a dataset locally or upload it to our SFTP servers using the credentials generated through your Percona Support ticket. + +### PostgreSQL monitoring: optimizing performance + +PMM 2.41.0 introduces limit for Auto-discovery in PostgreSQL, a feature that dynamically discovers all databases in your PostgreSQL instance. Limiting Auto-discovery reduces connections and prevents high CPU and RAM usage caused by multiple databases, thus optimizing performance. + +![!](../_images/PMM_Add_Instance_PostgreSQL_autodiscovery_enabled.png) + +For details, see [documentation](https://docs.percona.com/percona-monitoring-and-management/setting-up/client/postgresql.html#auto-discovery-limit). + +### PMM DBaaS functionality evolution into Percona Everest + +We have decided to separate our DBaaS offering into an independent product. Consequently, we are discontinuing the DBaaS functionality in PMM and offering a [migration path to Everest](http://per.co.na/pmm-to-everest-guide). + +While the DBaaS functionality will remain available in PMM versions 2.x, all future updates and enhancements will be exclusively accessible through the Percona Everest interface. +For a more streamlined and robust database deployment experience, try [Percona Everest](http://per.co.na/pmm-to-everest). 
+ +## New Features + +- [PMM-12459](https://jira.percona.com/browse/PMM-12459) - The [pmm_dump client utility](https://docs.percona.com/pmm-dump-documentation/index.html) previously available as a standalone client for PMM Server is now readily accessible within the PMM user interface. + +## Improvements + +- [PMM-11341](https://jira.percona.com/browse/PMM-11341) - PMM 2.41.0 introduces limit for Auto-discovery in PostgreSQL, a feature that dynamically discovers all databases in your PostgreSQL instance. Limiting Auto-discovery reduces connections and prevents high CPU and RAM usage caused by multiple databases, thus optimizing performance. +- [PMM-12375](https://jira.percona.com/browse/PMM-12375) - Starting with PMM 2.41.0, each instance of a service gets a `version` attribute in the PMM Inventory UI. +- [PMM-12422](https://jira.percona.com/browse/PMM-12422) - PMM 2.41.0 introduces a new flag called `--expose-exporter`. When you enable this flag any IP address, either from a local system or from anywhere on the internet, can access exporter endpoints. If the flag is not enabled, the exporter will be available only locally. +- [PMM-12544](https://jira.percona.com/browse/PMM-12544) - Added deprecation notices to the PMM documentation DBaaS pages. For a more streamlined and robust database deployment experience, try [Percona Everest](http://per.co.na/pmm-to-everest). +- [PMM-12549](https://jira.percona.com/browse/PMM-12549) - Added support for the latest MongoDB version. You can now use PMM to monitor MongoDB 7 databases. + + +## Components upgrade + +- [PMM-12154](https://jira.percona.com/browse/PMM-12154) - Updated `postgres_exporter` to version [0.14.0](https://github.com/prometheus-community/postgres_exporter/releases). With this update, we have resolved several performance issues and eliminated the creation of multiple connections per database. +- [PMM-12223](https://jira.percona.com/browse/PMM-12223) - Clickhouse has been updated to version 23.8.2.7, which optimizes memory and CPU usage to improve system performance. + + +## Bugs Fixed + +- [PMM-4712](https://jira.percona.com/browse/PMM-4712) - We have addressed the issue where the [pprof](https://github.com/google/pprof) heap reports for postgres_exporter were missing. +- [PMM-12626](https://jira.percona.com/browse/PMM-12626) - Due to the packages being built on an outdated Go version, there was a potential vulnerability. We have updated Go to the latest version to mitigate this risk. +- [PMM-12414](https://jira.percona.com/browse/PMM-12414) - Fixed an issue with an unexpected behavior (502 response) when accessing the `logs.zip` endpoint. This was caused by the `group_by` parameter being included in the Alertmanager configuration. Additionally, we removed AlertManager-related files from `logs.zip` since we stopped using AlertManager. +- [PMM-11714](https://jira.percona.com/browse/PMM-11714) - Registering a node with the Grafana Admin flag enabled but a non-admin role was failing. This issue has now been resolved. +- [PMM-12660](https://jira.percona.com/browse/PMM-12660) - Prior to version 2.41.0 of PMM, the endpoint `/v1/management/Agent/List` could deliver database certificates to the PMM UI, allowing an authenticated admin user to view the output of TLS certificates. This posed a security issue since certificates should be consumed by the backend only. We have resolved this issue now. 
- [PMM-12630](https://jira.percona.com/browse/PMM-12630) - When users attempted to upgrade PMM versions lower than or equal to 2.37.1, the upgrade process got stuck in a loop and failed. This issue has now been resolved.
- [PMM-12725](https://jira.percona.com/browse/PMM-12725) - Fixed the pagination for QAN.
- [PMM-12658](https://jira.percona.com/browse/PMM-12658) - Corrected a typo in the MongoDB cluster summary dashboard.
diff --git a/docs/release-notes/index.md b/docs/release-notes/index.md
index 40efd901e4..4e460a3e75 100644
--- a/docs/release-notes/index.md
+++ b/docs/release-notes/index.md
@@ -1,4 +1,5 @@
# Release Notes
+- [Percona Monitoring and Management 2.41.0](2.41.0.md)
- [Percona Monitoring and Management 2.40.1](2.40.1.md)
- [Percona Monitoring and Management 2.40.0](2.40.0.md)
- [Percona Monitoring and Management 2.39.0](2.39.0.md)
diff --git a/docs/setting-up/client/index.md b/docs/setting-up/client/index.md
index 36bd0a1028..62956ae2c1 100644
--- a/docs/setting-up/client/index.md
+++ b/docs/setting-up/client/index.md
@@ -33,7 +33,9 @@ Here's an overview of the choices.
- If using it, install [Docker].
- System requirements:
    - Operating system -- PMM Client runs on any modern 64-bit Linux distribution. It is tested on supported versions of Debian, Ubuntu, CentOS, and Red Hat Enterprise Linux. (See [Percona software support life cycle]).
-    - Disk -- A minimum of 100 MB of storage is required for installing the PMM Client package. With a good connection to PMM Server, additional storage is not required. However, the client needs to store any collected data that it cannot dispatch immediately, so additional storage may be required if the connection is unstable or the throughput is low. VMagent uses 1 GB of disk space for cache during a network outage. QAN, on the other hand, uses RAM to store cache.
+    - Disk -- A minimum of 100 MB of storage is required for installing the PMM Client package.
+
+      With a good connection to PMM Server, additional storage is not required. However, the client needs to store any collected data that it cannot dispatch immediately, so additional storage may be required if the connection is unstable or the throughput is low. VMagent uses 1 GB of disk space for cache during a network outage. QAN, on the other hand, uses RAM to store cache.
## Install
diff --git a/docs/setting-up/client/postgresql.md b/docs/setting-up/client/postgresql.md
index 21cef8fe5e..8ee4f27e86 100644
--- a/docs/setting-up/client/postgresql.md
+++ b/docs/setting-up/client/postgresql.md
@@ -245,10 +245,37 @@ If your PostgreSQL instance is configured to use TLS, click on the *Use TLS for
!!! hint alert alert-success "Note"
    For TLS connection to work SSL needs to be configured in your PostgreSQL instance. Make sure SSL is enabled in the server configuration file `postgresql.conf`, and that hosts are allowed to connect in the client authentication configuration file `pg_hba.conf`. (See PostgreSQL documentation on [Secure TCP/IP Connections with SSL].)
+### Auto-discovery limit
+
+PMM 2.41.0 introduces a limit for **Auto-discovery** in PostgreSQL, the feature that dynamically discovers all databases in your PostgreSQL instance.
+
+Limiting **Auto-discovery** reduces connections and prevents the high CPU and RAM usage that monitoring many databases can cause.
+
+!!! caution alert alert-warning
+    Limiting Auto-discovery may result in fewer metrics being captured from the non-primary databases. Ensure that you set the limit appropriately:
+
+    - Setting a high limit may impact performance adversely.
- Setting a low limit might result in some missing metrics due to Auto-discovery being disabled.
+
+By default, **Auto-discovery** is enabled (server-defined, with a limit of 10).
+
+![!](../../_images/PMM_Add_Instance_PostgreSQL_autodiscovery_enabled.png)
+
+When you select **Disabled**, the **Auto-discovery limit** will be set to `-1`.
+
+![!](../../_images/PMM_Add_Instance_PostgreSQL_autodiscovery_disabled.png)
+
+For a custom value, select **Custom** and enter or choose your preferred value in the **Auto-discovery limit** field.
+
+![!](../../_images/PMM_Add_Instance_PostgreSQL_autodiscovery_custom.png)
+
+
### On the command line
Add the database server as a service using one of these example commands. If successful, PMM Client will print `PostgreSQL Service added` with the service's ID and name. Use the `--environment` and `-custom-labels` options to set tags for the service to help identify them.
+
+
### Examples
Add instance with default node (`-postgresql`).
@@ -309,6 +336,25 @@ where:
- `USER`: Database user allowed to connect via TLS. Should match the common name (CN) used in the client certificate.
- `SERVICE`: Name to give to the service within PMM.
+#### Automatic discovery limit via CLI
+
+Starting with PMM 2.41.0, there is a new flag in `pmm-admin` to limit Auto-discovery:
+
+`--auto-discovery-limit=XXX`
+
+- If the number of databases > the Auto-discovery limit, Auto-discovery is **OFF**.
+- If the number of databases <= the Auto-discovery limit, Auto-discovery is **ON**.
+- If the Auto-discovery limit is not defined, it takes the default value 0 (server-defined, with a limit of 10), and Auto-discovery is **ON** (if you do not have more than 10 databases).
+- If the Auto-discovery limit is < 0, Auto-discovery is **OFF**.
+
+
+Example:
+
+If you set the limit to 10 and your PostgreSQL instance has 11 databases, automatic discovery will be disabled.
+ +`pmm-admin add postgresql --username="pmm-agent" --password="pmm-agent-password" --auto-discovery-limit=10` + + ## Check the service diff --git a/mkdocs-base.yml b/mkdocs-base.yml index 87c1309c5b..c44a7e7f3d 100644 --- a/mkdocs-base.yml +++ b/mkdocs-base.yml @@ -64,7 +64,8 @@ markdown_extensions: pymdownx.details: {} pymdownx.mark: {} pymdownx.smartsymbols: {} - pymdownx.tabbed: {} + pymdownx.tabbed: + {alternate_style: true} pymdownx.tilde: {} pymdownx.superfences: {} pymdownx.highlight: @@ -93,10 +94,9 @@ plugins: - "setting-up/client/docker.md" # https://github.com/orzih/mkdocs-with-pdf with-pdf: - output_path: "_pdf/PerconaMonitoringAndManagement-2.40.1.pdf" + output_path: "_pdf/PerconaMonitoringAndManagement-2.41.0.pdf" cover_title: "Percona Monitoring and Management Documentation" - cover_subtitle: 2.40.1 (Oct 20, 2023) - + cover_subtitle: 2.41.0 (Dec 12, 2023) author: "Percona Technical Documentation Team" cover_logo: docs/_images/Percona_Logo_Color.png custom_template_path: _resources/templates @@ -185,10 +185,12 @@ nav: - how-to/upgrade.md - how-to/secure.md - how-to/optimize.md + - how-to/HA.md - how-to/annotate.md - how-to/share-dashboard.md - how-to/extend-metrics.md - how-to/troubleshoot.md + - how-to/PMM_dump.md - Integrate with Percona Platform: - how-to/integrate-platform.md - how-to/account-info.md @@ -313,6 +315,7 @@ nav: - faq.md - Release Notes: - release-notes/index.md + - "PMM 2.41.0": release-notes/2.41.0.md - "PMM 2.40.1": release-notes/2.40.1.md - "PMM 2.40.0": release-notes/2.40.0.md - "PMM 2.39.0": release-notes/2.39.0.md diff --git a/variables.yml b/variables.yml index 6295effcd0..3b930a7683 100644 --- a/variables.yml +++ b/variables.yml @@ -1,9 +1,9 @@ # PMM Version for HTML # See also mkdocs.yml plugins.with-pdf.cover_subtitle and output_path -release: '2.40.1' -version: '2.40.1' -release_date: 2023-10-20 +release: '2.41.0' +version: '2.41.0' +release_date: 2023-12-12 # SVG icons. Use in markdown as {{icon.}} # For the Percona image icon (encoded inline SVG), see https://css-tricks.com/using-svg/ icon: