
16-bit Support and Dynamic Loss Scaling #360

Open · wants to merge 10 commits into base: master

Conversation

@jasonkena commented Feb 27, 2020

This uses Apex's AMP to enable 16-bit computation, increasing performance and lowering GPU memory consumption (it saved me 1 GB at batch size 4). Unfortunately, it doesn't work with torch.jit.

The dynamic loss scaling will potentially fix #359, #340, #318, #316, #222, #186, and #56.
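For context, here is a minimal sketch of the standard Apex AMP training pattern that 16-bit training with dynamic loss scaling relies on. The model, optimizer, and data-loader names are placeholders, and the `opt_level`/`loss_scale` settings follow Apex's documented defaults rather than necessarily the exact configuration in this PR.

```python
# Minimal Apex AMP sketch (illustrative; not the exact code in this PR).
# With opt_level="O1", Apex casts whitelisted ops to fp16 and uses
# dynamic loss scaling by default (loss_scale="dynamic").
import torch
from apex import amp

model = build_model().cuda()            # placeholder: the YOLACT model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

model, optimizer = amp.initialize(
    model, optimizer, opt_level="O1", loss_scale="dynamic"
)

for images, targets in data_loader:     # placeholder data loader
    optimizer.zero_grad()
    loss = compute_loss(model, images, targets)  # placeholder loss computation
    # scale_loss multiplies the loss by the current loss scale and
    # unscales gradients before the optimizer step; the scale is lowered
    # automatically whenever inf/NaN gradients are detected, which is what
    # avoids the "Moving average ignored a value of inf" failures.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```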

I also patched dcn_v2.py to support mixed precision, so that YOLACT++ works as well.
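One common way to keep a custom fp32-only CUDA op such as DCNv2 working under AMP is to register it so that AMP casts its tensor arguments to fp32 before the kernel runs. This is only an assumption about the approach and may differ from the actual dcn_v2.py patch; the module and function names below (`dcn_v2`, `dcn_v2_conv`) are what the DCNv2 wrapper typically exposes.

```python
# Hedged sketch: registering a custom op with Apex AMP so fp16 activations
# are cast back to float32 before reaching a kernel that only supports fp32.
# This may not match the actual change made in this PR.
from apex import amp
import dcn_v2  # module providing the deformable-convolution op (assumed name)

# Must be called before amp.initialize(); AMP then inserts the fp32 cast
# around every call to dcn_v2.dcn_v2_conv automatically.
amp.register_float_function(dcn_v2, "dcn_v2_conv")
```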

I am very sorry for the awful commit history; the Black code formatter reformatted essentially all of the code.

Cheers

@breznak (Contributor) commented May 16, 2020

Thanks @jasonkena! The issue description is interesting, and I'd especially welcome the fp16 support! But it is quite impossible to review with all the (false-positive) code changes caused by your editor's re-formatting. Can you avoid that and submit a PR with only your changes (no unnecessary whitespace changes)?

Successfully merging this pull request may close these issues:

Warning: Moving average ignored a value of inf