Issue with trying this with YouTube video, Docker, and Nvidia #311
Hi. I'm not sure why this happens. For now, I have just allowed custom compute type values in #312, but you may have to manually enter the value the error message suggests.
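For context, the compute type the UI collects is ultimately handed to faster-whisper. A minimal sketch of what a manually entered value maps to, assuming the standard faster-whisper API (the model size and audio path are placeholders):

```python
# Minimal sketch: the compute type is passed straight through to faster-whisper.
# On cards without float16 support (compute capability < 5.3), values like
# "float32" or "int8" are the ones likely to work.
from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float32")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```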
float16 requires Maxwell architecture with Compute Capability of 5.3 or above, while the Tesla M40 only supports 5.2.
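If you want to verify what your card reports, here is a quick sketch using PyTorch (assumes a CUDA-enabled PyTorch install):

```python
# Query the GPU's compute capability; float16 on Maxwell needs >= 5.3,
# and a Tesla M40 reports 5.2.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor}")
print("float16 supported:", (major, minor) >= (5, 3))
```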
faster-whisper now needs at least a fairly recent CUDA version. You can see CUDA version compatibility with your GPU here:
As @dng-nguyn said, it seems that the M40 supports only a really old version of CUDA, and it may take some struggling to set up. Using just the CPU may be a better choice, although it's slower.
I am running CUDA 12.6, as shown in my nvidia-smi screenshot from earlier. I guess if your container can't support my card, I will just not use it until I can afford a better card.
The card is supported in CUDA 12, though (Maxwell microarchitecture), just not float16. Have you tried running it bare-metal? Or changing the whisper implementation to openai's? Since you're manually building the image, this may help you.
@NightHawkATL Ah, sorry. I misread the table. Tesla M40 supports CUDA 12.x.
I didn't notice that. Lately GitHub doesn't let me open images in a new tab, which makes small images difficult to read. But since yours is Linux, the comment @dng-nguyn pointed out should help!
It seems that this will manually install some of the missing libraries. And if it's still problematic, you can try using openai's whisper implementation: edit line 19 of Whisper-WebUI/docker-compose.yaml (at commit f3f351e) to switch the whisper type, as sketched below.
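A rough sketch of what that edit might look like; the before/after contents of line 19 were in an embedded snippet that is not reproduced here, so the flag name and values below are assumptions for illustration, not the repository's confirmed syntax:

```yaml
# Hypothetical excerpt from docker-compose.yaml. The --whisper_type flag
# and its values are assumptions, not confirmed from the repo.
services:
  whisper-webui:
    build: .
    command: >
      python app.py
      --server_name 0.0.0.0
      --whisper_type whisper   # switch from faster-whisper to openai's whisper
```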
Investigated the issue. Since this was caused by a version incompatibility between dependencies, I've updated the image. The new image works fine now! If you still face the same bug, please feel free to re-open!
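To pick up the fixed image, a typical refresh looks like this (standard Docker Compose commands, assuming you are building locally as above):

```shell
# Rebuild the image from scratch and restart the container.
docker compose build --no-cache
docker compose up -d
```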
Which OS are you using?
I have cloned the git repo, built the image, and launched the UI correctly as far as I can tell. It is on my AI VM, which has an Nvidia M40 12GB passed through; it does say that it detected CUDA, and I can see it loading the model when testing YouTube transcription. It errors out and stops the container once it tries to create the file. I saw in another issue that someone was having a similar problem and you suggested they change the compute type, but I only have "float32" as an option for my setup. It is on the same VM as my Ollama and InvokeAI setups; those have access to the GPU but are currently not in use.
here is my compose:
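(The exact file isn't reproduced here; as a hypothetical sketch, a representative Whisper-WebUI compose with an Nvidia GPU passed through might look like this, where the service name, port, and build context are assumptions:)

```yaml
# Hypothetical reconstruction -- not the actual file from this report.
# Shows the standard Compose syntax for Nvidia GPU passthrough.
services:
  whisper-webui:
    build: .
    ports:
      - "7860:7860"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```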