CUDA out of memory on RTX 3090 24GB #150
Comments
I also encountered the same problem and am looking forward to the author's reply.
Same problem here. Can anybody help?
I encountered the same error and solved it. In my opinion, the 'CUDA out of memory' error can be resolved just by adding some arguments (e.g. `--gradient_accumulation_steps=1 --gradient_checkpointing`).
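For anyone unfamiliar with those flags, here is a minimal sketch of what they typically do in PyTorch training scripts. This is not this repo's actual code; the model, optimizer, and data below are toy stand-ins, and the flag names are only mirrored in comments:

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

# Toy stand-ins so the sketch runs on its own; the real script's model/data differ.
model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

grad_accum_steps = 1  # what --gradient_accumulation_steps=1 usually controls

for step in range(8):
    x = torch.randn(4, 64)
    y = torch.randint(0, 10, (4,))
    # --gradient_checkpointing: recompute this block's activations during
    # backward instead of caching them, trading extra compute for lower
    # peak memory.
    h = checkpoint(model[0], x, use_reentrant=False)
    logits = model[2](model[1](h))
    # Scale the loss so accumulated gradients average over the micro-batches.
    loss = criterion(logits, y) / grad_accum_steps
    loss.backward()
    # Only step the optimizer every grad_accum_steps micro-batches.
    if (step + 1) % grad_accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)
```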
I have tested my solution on an A30 (24 GB GPU only):
Step (1) Preparation: reduce the image size (change the command).
Step (2) Training: disable DDP in main.py (see the sketch after this list). With this change, the average peak memory is about 21589.97 MB.
Step (3) Generation: reduce the image size (change the command).
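A minimal sketch of the Step (2) change, assuming main.py wraps the model in torch's `DistributedDataParallel`; the `use_ddp` switch and the toy model here are hypothetical stand-ins for whatever gating the repo actually uses:

```python
import torch
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP

use_ddp = False  # hypothetical switch; skipping DDP is what freed memory here

model = nn.Linear(64, 10)  # toy stand-in for the repo's model
if torch.cuda.is_available():
    model = model.to("cuda")

if use_ddp:
    # Multi-GPU path: requires a launcher such as torchrun to set RANK,
    # WORLD_SIZE, etc. before init_process_group is called.
    torch.distributed.init_process_group(backend="nccl")
    model = DDP(model, device_ids=[torch.cuda.current_device()])
# else: plain single-GPU training, which is what fit within 24 GB above
```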
@xueqinxiang has the performance been affected?
This is really interesting work! I tried to run it on a server with a 24 GB 3090, and it runs out of memory. Is a larger GPU needed? Please let me know what size GPU you are able to run it on.
Thanks
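For comparing against the ~21.6 GB peak reported above, one way to measure peak GPU memory in PyTorch (standard `torch.cuda` calls, not repo-specific code; requires a CUDA device, and the workload below is a stand-in):

```python
import torch

# Reset the peak-memory counter, run the workload, then read the high-water mark.
torch.cuda.reset_peak_memory_stats()

x = torch.randn(1024, 1024, device="cuda")  # stand-in for the real workload
y = x @ x

peak_mb = torch.cuda.max_memory_allocated() / 1024**2
print(f"Peak CUDA memory: {peak_mb:.2f} MB")
```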