
DCGM exporter doesn't work on the latest version of Bottlerocket AMI #34

Open
peter-volkov opened this issue May 8, 2024 · 11 comments
Labels: bug (Something isn't working)

@peter-volkov

I'm not sure whether this is the correct place to report this; please redirect me if it isn't.

Goal:
I want an EKS cluster with working observability, using the Bottlerocket AMI and GPU nodes (g5* instances).
I use this Helm chart by enabling the amazon-cloudwatch-observability EKS add-on for my cluster.

Steps to reproduce:

  1. I create the latest version of EKS and GPU nodes with the latest Bottlerocket AMI.
  2. I use the latest nvidia-device-plugin ( https://github.com/NVIDIA/k8s-device-plugin/releases/tag/v0.15.0 )
  3. I enable the latest version of the amazon-cloudwatch-observability EKS add-on (the dcgm-exporter image 602401143452.dkr.ecr.us-east-2.amazonaws.com/eks/observability/dcgm-exporter:3.3.3-3.3.1-ubuntu22.04 is used)
  4. All related daemonsets except dcgm-exporter work well.
  5. The dcgm-exporter containers log the following:
time="2024-05-08T11:22:02Z" level=info msg="Starting dcgm-exporter"
Error: Failed to initialize NVML
time="2024-05-08T11:22:02Z" level=fatal msg="Error starting nv-hostengine: DCGM initialization error"

I guess this is a version-incompatibility issue between DCGM and the NVIDIA driver (installed on the nodes via k8s-device-plugin).
What should I do to make the DCGM exporter work?

@mitali-salvi
Contributor

Hey @peter-volkov,
could you provide the full Bottlerocket AMI ID so that we can re-create this issue on our end?

@peter-volkov
Author

peter-volkov commented May 21, 2024

I appreciate your help.
I'm just creating a node group with ami_type = "BOTTLEROCKET_x86_64_NVIDIA"
specified in my Terraform config. It picks up the latest version of the image family during initial creation.
Currently I have release_version = "1.19.5-64049ba8"
imageId=ami-0f3f964e4f939bbd0

But I do not really care about the exact version. If you can successfully run the DCGM exporter as part of the amazon-cloudwatch-observability EKS add-on with g5.xlarge on any Bottlerocket image, that will be enough for me. I will then consider the issue my own problem and debug it myself.
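For context, the relevant part of my Terraform config looks roughly like this (a minimal sketch: only ami_type and release_version are from my actual setup; the resource name and other arguments are illustrative):

```hcl
# Minimal sketch of the GPU node group; cluster name, role, subnets, and
# scaling values below are illustrative placeholders.
resource "aws_eks_node_group" "gpu" {
  cluster_name    = "my-cluster"        # illustrative
  node_group_name = "gpu"               # illustrative
  node_role_arn   = var.node_role_arn   # illustrative
  subnet_ids      = var.subnet_ids      # illustrative

  ami_type        = "BOTTLEROCKET_x86_64_NVIDIA"
  release_version = "1.19.5-64049ba8"   # resolved to ami-0f3f964e4f939bbd0
  instance_types  = ["g5.xlarge"]

  scaling_config {
    desired_size = 1
    min_size     = 1
    max_size     = 2
  }
}
```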

@lfpalacios

I'm experiencing this as well. I use the official bottlerocket-nvidia AMI ami-02ce823b770755757 (bottlerocket-aws-k8s-1.30-nvidia-x86_64-v1.20.2-536d69d0) in my EKS 1.30 cluster for GPU workloads.

The dcgm-exporter enters a CrashLoopBackOff state, with the following logs:

Warning #2: dcgm-exporter doesn't have sufficient privileges to expose profiling metrics. To get profiling metrics with dcgm-exporter, use --cap-add SYS_ADMIN
time="2024-07-11T18:15:00Z" level=info msg="Starting dcgm-exporter"
Error: Failed to initialize NVML
time="2024-07-11T18:15:00Z" level=fatal msg="Error starting nv-hostengine: DCGM initialization error"

I tried to manually set up the capability in the aws-observability Helm chart's values.yaml, but the parameter seems to be ignored or not to exist:

dcgmExporter:
  ...
  securityContext:
    capabilities:
      add: ["SYS_ADMIN"]

I tried to make dcgm-exporter work in many ways, using the official NVIDIA Helm charts and also the Datadog integration; it all fails when dcgm-exporter tries to start up on Kubernetes.

@lisguo
Contributor

lisguo commented Jul 12, 2024

We are aware of the issue and are looking at potential solutions. Will keep you posted on a path forward.

@lisguo lisguo added the bug Something isn't working label Jul 12, 2024
@dbcelm

dbcelm commented Aug 13, 2024

Facing this issue as well with Bottlerocket OS AMI GPU nodes. I don't see a direct way to disable the DCGM exporter from the Helm chart, though. Can we include a conditional variable in the chart to disable the DCGM exporter until this is fixed? As of now, the only way seems to be to delete its CRD resource.

@movence
Contributor

movence commented Aug 19, 2024

@dbcelm You can try updating the agent configuration to disable accelerated hardware monitoring, which should disable the DCGM exporter in your cluster.
The following configuration disables the DCGM exporter while keeping the Enhanced Container Insights feature active:

{
  "logs": {
    "metrics_collected": {
      "kubernetes": {
        "enhanced_container_insights": true,
        "accelerated_compute_metrics": false
      }
    }
  }
}

For more details, please check the doc.

Please note that disabling GPU monitoring with the accelerated_compute_metrics flag in the agent configuration disables both DCGM Exporter (NVIDIA) and Neuron Monitor (Inferentia & Trainium).
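If you manage the add-on itself rather than the agent configuration directly, the same settings can be passed as the add-on's configuration values. This is a sketch assuming the add-on nests the agent configuration under an agent.config key; verify against the add-on's configuration schema before applying:

```json
{
  "agent": {
    "config": {
      "logs": {
        "metrics_collected": {
          "kubernetes": {
            "enhanced_container_insights": true,
            "accelerated_compute_metrics": false
          }
        }
      }
    }
  }
}
```

This could be supplied, for example, via `aws eks update-addon --addon-name amazon-cloudwatch-observability --configuration-values file://config.json` or through the console.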

@dbcelm

dbcelm commented Aug 22, 2024

@movence even after this change, I don't see GPU metrics widgets on Container Insights Dashboard

@movence
Contributor

movence commented Aug 26, 2024

> I don't see GPU metrics widgets on Container Insights Dashboard

@dbcelm Are you looking for an option to disable GPU monitoring or to disable DCGM Exporter only for your cluster?

GPU widgets in the Container Insights Dashboard are displayed conditionally, only when there are GPU metrics to generate the widgets from. If you followed the instructions above to disable GPU monitoring, there will be no GPU metrics, since the flag disables BOTH DCGM Exporter AND Neuron Monitor.

@dbcelm

dbcelm commented Aug 26, 2024

@movence I have installed DCGM Exporter helm chart separately on this cluster

@movence
Contributor

movence commented Aug 29, 2024

@dbcelm thanks for the information. Here are some possible reasons:

  • A DCGM Exporter that was spun up separately does not have the required label (k8s-app=dcgm-exporter-service), which CWAgent uses to look up DCGM Exporter via its service-discovery mechanism within a node.
  • CWAgent uses TLS to communicate with DCGM Exporter, with certificates generated by the Helm charts. These certificates are added to both CWAgent and DCGM Exporter as secrets using volume mounts. DCGM Exporters not managed by the Helm charts may be missing this TLS setup.
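As a sketch of the first point (the name, namespace, and image tag below are illustrative, and the TLS setup from the second point is not shown), a self-managed DCGM Exporter would at least need the discovery label on its pods:

```yaml
# Illustrative only: a self-managed dcgm-exporter DaemonSet carrying the label
# that CWAgent's service discovery matches on. Namespace, image, and port are
# assumptions; TLS wiring would still be required separately.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dcgm-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      k8s-app: dcgm-exporter-service
  template:
    metadata:
      labels:
        k8s-app: dcgm-exporter-service
    spec:
      containers:
        - name: dcgm-exporter
          image: nvcr.io/nvidia/k8s/dcgm-exporter:3.3.3-3.3.1-ubuntu22.04
          ports:
            - name: metrics
              containerPort: 9400
```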

@dbcelm

dbcelm commented Sep 10, 2024

@movence I have deployed through the add-on and now the DCGM exporter pods are working, but I still don't see GPU metrics on the dashboard. I suspect it could be because the DCGM exporter is not running with "hostNetwork: true", as I use the Cilium CNI.

Is there a way I can configure "hostNetwork: true" when deploying through the add-on, so I can test this?
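For reference, what I'd like to apply is roughly this strategic-merge patch (a sketch: the DaemonSet name and namespace are assumptions, and I realize the managed add-on may revert manual changes on its next reconciliation):

```yaml
# hostnetwork-patch.yaml -- illustrative strategic-merge patch; the managed
# add-on may reconcile it away.
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
```

applied with something like `kubectl -n amazon-cloudwatch patch daemonset dcgm-exporter --patch-file hostnetwork-patch.yaml` (names assumed).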


6 participants