ValueError: torch.cuda.is_available() should be True but is False #2253
-
I always get this error when I start training a LoRA.
My host PC has Python 3.10.11 from the pyenv global setting, and the venv Python is the same version. I have installed CUDA 11.8 and 12.4, configured 11.8 in PATH, and made sure it is the default one. With both the host Python and the venv Python, I start python and enter this code:
The output is
But when I start training, the script stops with the same error.
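The check entered in the interpreter was presumably the standard PyTorch CUDA check; the exact snippet and its output were not preserved in the post, so this is only a sketch (the import is guarded so it also runs where torch is missing):

```python
import importlib.util

def cuda_status():
    """Return (torch_installed, cuda_available) for the current interpreter.

    A sketch of the usual availability check; the exact lines from the
    original post were not preserved.
    """
    if importlib.util.find_spec("torch") is None:
        return (False, False)
    import torch
    # is_available() is False when no driver/GPU is visible to this process,
    # or when the installed wheel is a CPU-only build of torch.
    return (True, torch.cuda.is_available())

print(cuda_status())
```

If the second value prints False inside the venv but True in the host Python, the venv likely has a CPU-only torch wheel installed.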
Or how can I make the script use the 4090 only? I would appreciate it if someone could help.
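One common way to pin a process to a single GPU is the `CUDA_VISIBLE_DEVICES` environment variable, set before torch initializes CUDA. A minimal sketch, assuming the 4090 is device index 0 (check the actual index on your machine):

```python
import os

# Restrict this process to one GPU. Must be set before any CUDA
# initialization (i.e. before importing/using torch.cuda).
# "0" is an assumption -- substitute the index of your 4090.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

The same can be done in cmd with `set CUDA_VISIBLE_DEVICES=0` before launching the script.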
-
I tried reinstalling the whole venv environment and still got the same issue. But when I try the last command directly in cmd, it works fine.
And if I put this command directly in cmd, it works. But it still does not work when I click the start button in the GUI.
-
Check your Task Manager and see which GPU number your 4090 has.
You'll see the ID under "Physical location:"; in my case it is 0, since I only have one GPU enabled. Specify this number in the accelerate setup.
Also, in the GUI you can force-override the accelerate setup to choose a specific GPU to use.
Don't worry about screwing up the accelerate config, you can redo it if need be.
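For reference, the GPU choice ends up in accelerate's saved config file (typically `default_config.yaml` under the Hugging Face cache directory; the exact values below are assumptions for a single-GPU setup, not your actual config):

```yaml
# Sketch of an accelerate config pinned to one GPU.
# gpu_ids: '0' is an assumption -- use the index of your 4090.
compute_environment: LOCAL_MACHINE
distributed_type: 'NO'
gpu_ids: '0'
num_processes: 1
mixed_precision: fp16
```

Re-running `accelerate config` regenerates this file interactively, which is why redoing it is harmless.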