Funnily enough, I also added the option to run it on another GPU. When I choose cuda:1, though, I get 2 GB allocated on cuda:0, even though that device is not specified anywhere in generate.py. Combined with disabling ECC (`nvidia-smi -i 1 -e 0`) this is fine, because I can get over 912 KibiPixels (1280x720 or 1488x624), but it would be good to understand what, why, and how.
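For what it's worth, phantom allocations on cuda:0 usually come from the CUDA context being created on the default device, or from a checkpoint being restored onto the device it was saved from. A minimal sketch of how I'd try to pin everything to the second card; `checkpoint.pt` and the commented-out model are placeholders, not the real names in generate.py:

```python
import torch

# Keep both GPUs visible but make sure nothing touches cuda:0.
target = torch.device("cuda:1")
torch.cuda.set_device(target)  # any later context creation goes to GPU 1

# torch.load without map_location restores tensors to the device they were
# saved from (often cuda:0), which silently allocates memory there.
state = torch.load("checkpoint.pt", map_location=target)  # placeholder path

# model = MyModel()                # placeholder for whatever generate.py builds
# model.load_state_dict(state)
# model.to(target)

# Quick check for stray allocations on each card:
print(torch.cuda.memory_allocated(0), torch.cuda.memory_allocated(1))
```

Alternatively, launching with `CUDA_VISIBLE_DEVICES=1 python generate.py` hides GPU 0 from the process entirely, at which point the remaining card is enumerated as cuda:0 inside PyTorch.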
When reading up on torch's DataParallel I saw several mentions of a similar issue. May want to start there.
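If it is DataParallel, note that its defaults pull in every visible GPU and gather the output on device_ids[0], which is cuda:0 unless you say otherwise. A rough sketch of restricting it to the second card (the module and input here are placeholders for the real network and batch):

```python
import torch
import torch.nn as nn

# placeholder module standing in for the real network
model = nn.Linear(512, 512).to("cuda:1")

# By default DataParallel uses all visible GPUs and gathers results on
# device_ids[0] == cuda:0; restricting device_ids keeps GPU 0 untouched.
model = nn.DataParallel(model, device_ids=[1], output_device=1)

x = torch.randn(8, 512, device="cuda:1")  # placeholder input batch
y = model(x)
print(y.device)  # expect cuda:1
```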
I have a cut of this code from a week or two ago.