Is there a way to use multi GPU for more VRAM? #250
Replies: 6 comments 9 replies
-
Multi-GPU support would be really nice to have. We could, for example, load some of the models on one GPU and the rest on another; one thing that comes to mind is loading GFPGAN and RealESRGAN together on one GPU, or each on a different GPU, to speed up those parts of the generation while the main model stays on the main GPU. I give this example because I have three GPUs in my computer: two GTX 750 Ti cards with 4 GB of VRAM each, while the main one is an RTX 3059 with 8 GB of VRAM. I normally use the RTX for everything since it has the most VRAM, but the other GPUs could still be used for other work. In my case, if we could split the models across the GPUs, it would speed things up a lot: after the main GPU generates an image, the other GPUs would upscale it and fix the faces in parallel, while the main GPU keeps generating images without stopping to do anything else. That would be one way to implement it. Of course not everyone has three GPUs, but even with two GPUs the same process would speed things up. The GPUs could also all be used for every step at the same time, which would speed things up a lot as well.
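The per-stage split described above amounts to a device-assignment step. Here is a minimal sketch, not code from any branch; `assign_devices` and the stage names are hypothetical, and in practice you would pass `torch.cuda.device_count()` as `n_gpus` and move each model with `.to(device)`:

```python
def assign_devices(stage_names, n_gpus):
    """Round-robin pipeline stages onto the available GPUs.

    With three GPUs, the diffusion model, GFPGAN, and RealESRGAN each get
    their own device; with fewer GPUs the stages share devices, and with
    none everything falls back to the CPU.
    """
    if n_gpus == 0:
        return {name: "cpu" for name in stage_names}
    return {name: f"cuda:{i % n_gpus}" for i, name in enumerate(stage_names)}

# Example: the three-GPU setup described above.
placement = assign_devices(["stable-diffusion", "gfpgan", "realesrgan"], 3)
# placement == {"stable-diffusion": "cuda:0", "gfpgan": "cuda:1", "realesrgan": "cuda:2"}
```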
-
Yeah, that would be awesome. In my case, my second card is a 1660 Ti, which produces fully green images unless the full-precision parameter is used, and that in turn increases VRAM consumption. But if I could use its VRAM on top of the first card's, or, like you mention, load specific models onto it, it would make creating bigger and more detailed images easy, without needing to go for a single bigger card.
-
Combining VRAM to be able to generate a larger image? Not right now; it might become possible, it might not. If you just want to use a different GPU, that's already implemented with the command-line option --gpu. If you'd like to generate on all your GPUs, there's a workaround discussed here.
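The usual workaround for generating on all GPUs is to run one process per card, each pinned to a single device via `CUDA_VISIBLE_DEVICES`. A minimal sketch of building those per-process launch commands; `per_gpu_commands` and the script path are hypothetical, not part of this repo:

```python
import os

def per_gpu_commands(script, n_gpus):
    """Build one (command, environment) pair per GPU.

    Each process sees exactly one device, so inside the process that GPU
    appears as cuda:0; the pairs can then be launched with
    subprocess.Popen to generate in parallel.
    """
    jobs = []
    for i in range(n_gpus):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(i))
        jobs.append((["python", script], env))
    return jobs

# Two workers, pinned to GPU 0 and GPU 1 respectively.
jobs = per_gpu_commands("scripts/txt2img.py", 2)
```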
-
If you want to use ESRGAN and GFPGAN on different GPUs, please try this branch and report back: #264
-
I have 4 to 8 GPUs with 16 GB of VRAM each. I would like to run one instance of Stable Diffusion that spans all the cards for combined VRAM, as OP mentioned. Will this be possible? I have tried Hugging Face Accelerate, but it did not span multiple GPUs.
-
@hlky, there has been successful work on using multiple GPUs with a single instance of the web UI. This seems to be the branch: https://github.com/NickLucche/stable-diffusion-nvidia-docker/tree/dp Would you be able to integrate such working multi-GPU capabilities into this repo?
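For context, a data-parallel ("dp") approach like the one in that branch replicates the full pipeline on every GPU and splits the batch of prompts across the replicas, so throughput scales with the number of cards but the VRAM available to a single image does not grow. A hedged sketch of that batch-splitting step (`split_batch` is a hypothetical name, not from that repo):

```python
def split_batch(prompts, n_gpus):
    """Distribute a batch of prompts across GPU replicas round-robin.

    Each replica holds its own full copy of the model, so this speeds up
    batch generation but does not combine VRAM for one larger image.
    """
    shards = [[] for _ in range(n_gpus)]
    for i, prompt in enumerate(prompts):
        shards[i % n_gpus].append(prompt)
    return shards

# Example: three prompts across two GPUs.
shards = split_batch(["a cat", "a dog", "a boat"], 2)
# shards == [["a cat", "a boat"], ["a dog"]]
```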
-
Would there be a way to be able to use more than one GPU to have more total VRAM available?