Out of memory error #21
You could switch to PyTorch 1.6 and adopt mixed-precision training to reduce memory usage.
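A minimal sketch of what mixed-precision training looks like with PyTorch's native AMP API (available since 1.6). The model, input sizes, and optimizer here are hypothetical stand-ins, not CoCosNet's actual training loop; AMP is only enabled when a CUDA device is present.

```python
import torch
from torch import nn

# Pick a device; autocast/GradScaler are no-ops when disabled on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = nn.Linear(128, 10).to(device)          # stand-in for the real network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x = torch.randn(1, 128, device=device)         # batch_size = 1
target = torch.randint(0, 10, (1,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=use_amp):
    # Forward pass runs in float16 where safe, float32 elsewhere.
    loss = nn.functional.cross_entropy(model(x), target)
scaler.scale(loss).backward()                  # backward on the scaled loss
scaler.step(optimizer)                         # unscales grads, then steps
scaler.update()                                # adjusts the loss scale
```

Activations stored in float16 roughly halve activation memory, which is usually where the savings come from at training time.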
16 GB is OK, and 32 GB is better, since it allows a larger batch size.
@panzhang0212 Thank you. I made a typo in my comment: I used a 2070 Super, not a 2060 Super, and its memory is 8 GB.
I have the same out-of-memory problem. I use a 2080 Ti with 10 GB of memory, and even after reducing batch_size to 2 the error still occurs.
@mlxht990720 Did you try batch_size = 1 on the 2080 Ti?
Yes, but it still runs out of memory. Maybe I need to use more GPUs. 🥲
Any ideas for decreasing the memory cost of training?
You can try mixed-precision training and gradient checkpointing. Beyond that, you can try https://github.com/facebookresearch/fairscale
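Gradient checkpointing trades compute for memory: activations inside the checkpointed block are discarded after the forward pass and recomputed during backward. A small sketch with `torch.utils.checkpoint`, using a hypothetical two-layer block rather than any actual CoCosNet module:

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

# Hypothetical sub-network whose intermediate activations we don't want to keep.
block = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

# The input must require grad so gradients flow through the checkpoint.
x = torch.randn(4, 64, requires_grad=True)

y = checkpoint(block, x)   # forward pass; inner activations are not stored
y.sum().backward()         # block's forward is re-run here to get gradients
```

The saving scales with the depth of the checkpointed segment, at the price of one extra forward pass per segment during backward.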
What is the minimum GPU memory required for training with batch_size = 1?
I used a 2060 Super GPU on my machine, but this error occurred.