Add bf16 autocast #126

Merged 6 commits into jbloomAus:main on May 7, 2024

Conversation

tomMcGrath
Contributor

Description

Adds torch autocasting to the SAE train step, improving throughput.
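
At a high level, the idea is to wrap the SAE forward pass in torch.autocast so the heavy matmuls run in bf16 while parameters and optimizer state stay in fp32. A minimal illustrative sketch (placeholder names and loss, not the actual SAE Lens train step; the PR's commits also add gradient scaling, omitted here for brevity):

```python
import torch

def train_step(sae, optimizer, batch, autocast_dtype=torch.bfloat16):
    # Run the forward pass under autocast so matmuls execute in bf16
    # while parameters and optimizer state remain in fp32.
    with torch.autocast(device_type="cuda", dtype=autocast_dtype):
        reconstruction = sae(batch)
        loss = torch.nn.functional.mse_loss(reconstruction, batch)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.detach()
```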

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist:

  • I have commented my code, particularly in hard-to-understand areas
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works (see W&B dash)
  • New and existing unit tests pass locally with my changes
  • I have not rewritten tests relating to key interfaces which would affect backward compatibility

You have tested formatting, typing and unit tests (acceptance tests not currently in use)

  • I have run make check-ci to check format and linting. (You can run make format to format code if needed.)

Performance Check.

If you have implemented a training change, please indicate precisely how performance changes with respect to the following metrics:

  • L0
  • CE Loss
  • MSE Loss
  • Feature Dashboard Interpretability

W&B dashboard showing no performance regression when autocasting to bf16, along with a moderate speedup and improved performance per unit time: https://wandb.ai/tmcgrath/autocast%20testing/workspace

More performance could probably be obtained by compiling and autocasting the LLM, and perhaps by compiling the SAE as well. Tuning other hyperparameters would also likely improve performance.
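
For illustration, compiling and autocasting the LLM forward pass that generates activations might look roughly like this (hypothetical helper, not part of this PR):

```python
import torch

def make_activation_fn(llm):
    # Compile the LLM once, then run its forward pass under bf16 autocast
    # when generating activations for SAE training (no gradients needed).
    compiled_llm = torch.compile(llm)

    @torch.no_grad()
    def get_activations(tokens):
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            return compiled_llm(tokens)

    return get_activations
```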

@tomMcGrath
Contributor Author

BTW, I tried adding fp16 support, but training performance degraded quite badly, so it seems more like a footgun; it also makes configs hard to serialise.
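
For context, supporting fp16 would also mean threading a GradScaler through the train step, since fp16's narrow exponent range can underflow small gradients. Roughly (illustrative sketch, not code from this PR):

```python
import torch

scaler = torch.cuda.amp.GradScaler()

def fp16_train_step(sae, optimizer, batch):
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = torch.nn.functional.mse_loss(sae(batch), batch)
    scaler.scale(loss).backward()  # scale the loss before backward
    scaler.step(optimizer)         # unscales gradients, then steps
    scaler.update()                # adjusts the scale factor for the next step
    optimizer.zero_grad()
    return loss.detach()
```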


codecov bot commented May 7, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 64.29%. Comparing base (5f46329) to head (ba0b8a7).
Report is 2 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #126      +/-   ##
==========================================
- Coverage   64.40%   64.29%   -0.12%     
==========================================
  Files          17       17              
  Lines        1753     1770      +17     
  Branches      289      291       +2     
==========================================
+ Hits         1129     1138       +9     
- Misses        560      568       +8     
  Partials       64       64              


@jbloomAus
Owner

Looks awesome! Really appreciate the wandb runs!

@jbloomAus jbloomAus merged commit 8e28bfb into jbloomAus:main May 7, 2024
6 of 7 checks passed
tom-pollak pushed a commit to tom-pollak/SAELens that referenced this pull request Oct 22, 2024
* add bf16 autocast and gradient scaling

* simplify autocast setup

* remove completed TODO

* add autocast dtype selection (generally keep bf16)

* formatting fix

* remove autocast dtype