GPUs are not visible from the container #293
Comments
Similar to #294: it seems that CUDA (the software stack that comes with the NVIDIA GPU) is not installed or not detected on your PC. Could you run nvidia-smi and see what it prints out? As far as I know, CUDA should be installed on your base machine, not in the container.
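If the NVIDIA Container Toolkit is set up on the host, a quick way to check whether Docker itself can reach the GPUs is to run a throwaway CUDA container. This is only a sketch; the image tag nvidia/cuda:12.2.0-base-ubuntu22.04 is an example and should be matched to your installed driver:

```bash
# Check that the NVIDIA runtime is registered with Docker
docker info | grep -i nvidia

# Run nvidia-smi inside a disposable CUDA container; if this fails,
# the problem is the Docker/GPU setup, not Whisper-WebUI itself
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If that prints the same table as nvidia-smi does on the host, the container runtime is fine and the problem is more likely in the compose configuration.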
Yep, correct, on my host it's working perfectly: both $ nvcc --version and $ nvidia-smi run fine.
It's working on my host, but not from the container. I assume it's my rig...
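One way to narrow it down is to check whether PyTorch inside the running container sees any CUDA device at all. A minimal sketch, assuming the compose service is named app and the image keeps its virtualenv at /Whisper-WebUI/venv (both are guesses, not confirmed here):

```bash
# With the compose stack up, ask torch inside the container whether it
# can see CUDA and how many GPUs it counts (assumed service name and venv path)
docker compose exec app /Whisper-WebUI/venv/bin/python -c \
  "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```

If this prints False 0 while nvidia-smi works on the host, the GPUs are simply not being passed through to the container.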
Which OS are you using?
I thought this might be an issue with my two-GPU rig running containers, but for some reason, after wiping out Docker, the NVIDIA drivers, and CUDA, I cannot get it working and I keep getting this error. It looks like my Docker is not able to reach my GPUs:
[+] Running 2/2
✔ Network whisper-webui_default Created 0.1s
✔ Container whisper-webui-app-1 Created 0.1s
Attaching to app-1
app-1 | /Whisper-WebUI/venv/lib/python3.11/site-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
app-1 | warnings.warn("Can't initialize NVML")
app-1 | Use "faster-whisper" implementation
app-1 | Device "auto" is detected
app-1 | Running on local URL: http://0.0.0.0:7860
app-1 |
app-1 | To create a public link, set `share=True` in `launch()`.
Any thoughts or ideas on how I can make my GPUs visible? (They are visible on the host; nvidia-smi confirms it.)
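Since the logs show docker compose being used, one possible culprit is that the compose file never requests the GPUs; with the Compose spec, GPU access has to be declared explicitly. A minimal sketch, assuming the service is named app (taken from the whisper-webui-app-1 container name) and everything else in the file stays as it is:

```yaml
# docker-compose.yml (sketch): request all NVIDIA GPUs for the service
services:
  app:
    # ...image/build/ports unchanged...
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

This also assumes the nvidia-container-toolkit package was reinstalled after the wipe and that Docker was restarted after running sudo nvidia-ctk runtime configure --runtime=docker; after removing the drivers, that step is easy to miss.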