As shown in the screenshot, it raises `AssertionError: Torch not compiled with CUDA enabled`.

```python
import torch
print(torch.cuda.is_available())  # prints False
```

I'm on a MacBook M1 Pro laptop, and LM Studio runs other model weights fine. Do I need to run this in the base environment, or is there a way to get it running in a normal environment? Could the program be modified to remove the CUDA-specific parts?
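To remove the hard-coded CUDA dependency, one common approach is to pick the backend at runtime instead of assuming CUDA. A minimal sketch (not the project's actual loading code, just the device-selection idiom):

```python
import torch

# Pick the best available backend: CUDA on NVIDIA GPUs, MPS on Apple
# Silicon, CPU otherwise. This avoids hard-coded .cuda() calls, which
# raise "Torch not compiled with CUDA enabled" on a Mac.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(device)
```

Any tensor or module can then be moved with `.to(device)` regardless of platform.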
Mac support will have to wait for the upcoming ollama solution. You can also try importing the model into ollama yourself.
Mine is an M1 Max. After modifying the code run from the terminal, it can run on CPU, but it's very slow (20 minutes without a single token, so I killed it).
ollama and LM Studio both work fine.
On an M1 Mac I tried MPS; the monitor shows GPU activity, but inference is still not fast. Edit `deploy/web_streamlit_for_instruct_v*.py`, replacing `v.cuda()` with:

```python
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
v.to(device)
```

and comment out `torch.cuda.empty_cache()`.
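Putting that comment's pieces together, a hedged sketch of the patched loading step might look like this. The `state_dict` variable and its contents are assumptions for illustration; the real script iterates over the model's tensors and calls `v.cuda()` on each:

```python
import torch

# Fall back to CPU when MPS is unavailable, mirroring the fix above.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Hypothetical stand-in for the weights the script loads.
state_dict = {"w": torch.zeros(2, 2)}

# Replace v.cuda() with v.to(device) so the same code runs on MPS or CPU.
state_dict = {k: v.to(device) for k, v in state_dict.items()}

# torch.cuda.empty_cache() is CUDA-only, so guard (or delete) the call.
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```

Note that `v.to(device)` returns a new tensor rather than moving `v` in place, so the result must be assigned back, as the dict comprehension does here.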