
get hookedSAETransformer + tutorial working #125

Closed
wants to merge 7 commits into from

Conversation

dtch1997
Contributor

@dtch1997 dtch1997 commented May 7, 2024

Description

Updates Joseph's HookedSAETransformer to current codebase

Fixes # (issue)
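For illustration only (not taken from this PR's diff): the thing a HookedSAETransformer automates is splicing an SAE's reconstruction into the model's forward pass at a chosen hook point. Below is a minimal sketch of that splice using plain TransformerLens hooks and a toy SAE; the hook name and SAE width are arbitrary choices for the example.

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
hook_name = "blocks.8.hook_resid_pre"  # arbitrary hook point for illustration
d_in, d_sae = model.cfg.d_model, 4 * model.cfg.d_model

# Toy SAE parameters; a trained SAE would supply these.
W_enc = torch.randn(d_in, d_sae) * 0.01
b_enc = torch.zeros(d_sae)
W_dec = torch.randn(d_sae, d_in) * 0.01
b_dec = torch.zeros(d_in)

def splice_sae(activations, hook):
    # Encode to sparse features, decode back, and replace the activation
    # with its SAE reconstruction.
    feature_acts = torch.relu((activations - b_dec) @ W_enc + b_enc)
    return feature_acts @ W_dec + b_dec

logits = model.run_with_hooks("Hello world", fwd_hooks=[(hook_name, splice_sae)])
```

A HookedSAETransformer packages this splice-in (plus caching of feature activations) behind a class interface instead of ad hoc hooks.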

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist:

  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have not rewritten tests relating to key interfaces which would affect backward compatibility

You have tested formatting, typing, and unit tests (acceptance tests are not currently in use).

  • I have run make check-ci to check formatting and linting. (You can run make format to format code if needed.)

Performance Check.

If you have implemented a training change, please indicate precisely how performance changes with respect to the following metrics:

  • L0
  • CE Loss
  • MSE Loss
  • Feature Dashboard Interpretability

Please link to wandb dashboards with a control and a test group. (A rough sketch of the first three metrics follows below.)
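As referenced above, a minimal sketch of how the first three metrics are commonly computed for an SAE; this is an assumption-laden illustration, not SAE Lens's actual evaluation code. Here `acts` are the hooked activations, `feature_acts` the SAE's encoder output, and `recon` the reconstruction.

```python
import torch

def l0(feature_acts: torch.Tensor) -> torch.Tensor:
    # Average number of active (non-zero) features per token.
    return (feature_acts != 0).float().sum(dim=-1).mean()

def mse_loss(acts: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    # Mean squared reconstruction error.
    return (recon - acts).pow(2).mean()

# CE loss is typically measured by running the language model once normally
# and once with the SAE reconstruction spliced in at the hook point, then
# comparing the two cross-entropy losses (see the hook sketch in the PR
# description above). Feature dashboard interpretability is assessed
# qualitatively rather than in code.
```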

Copy link

codecov bot commented May 7, 2024

Codecov Report

Attention: Patch coverage is 33.33333%, with 8 lines in your changes missing coverage. Please review.

Project coverage is 63.90%. Comparing base (c1d9cbe) to head (87b2bb7).
Report is 1 commit behind head on main.

Files                                     Patch %   Lines
sae_lens/training/sparse_autoencoder.py   33.33%    7 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #125      +/-   ##
==========================================
- Coverage   64.11%   63.90%   -0.21%     
==========================================
  Files          17       17              
  Lines        1761     1773      +12     
  Branches      289      291       +2     
==========================================
+ Hits         1129     1133       +4     
- Misses        568      575       +7     
- Partials       64       65       +1     


@ArthurConmy
Contributor

ArthurConmy commented May 9, 2024

Why is code being copied and pasted over from TransformerLens? Is it not possible to just use transformer_lens.HookedSAETransformer: TransformerLensOrg/TransformerLens#536?
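For readers unfamiliar with the upstream class being referenced, here is a rough sketch of the transformer_lens HookedSAETransformer / HookedSAE interface added in that PR. The field and method names below are best-effort recollections of that API, not something verified against this repository.

```python
from transformer_lens import HookedSAE, HookedSAEConfig, HookedSAETransformer

model = HookedSAETransformer.from_pretrained("gpt2")

cfg = HookedSAEConfig(
    d_sae=4 * model.cfg.d_model,
    d_in=model.cfg.d_model,
    hook_name="blocks.8.hook_resid_pre",  # arbitrary hook point for the example
)
sae = HookedSAE(cfg)  # randomly initialised; real weights would be loaded in

# Splice the SAE in at its hook point for a single forward pass.
logits = model.run_with_saes("Hello world", saes=[sae])

# Or attach it persistently, cache activations (including SAE features), detach.
model.add_sae(sae)
logits, cache = model.run_with_cache("Hello world")
model.reset_saes()
```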

@jbloomAus
Owner

Why is code being copied and pasted over from TransformerLens? Is it not possible to just use transformer_lens.HookedSAETransformer: neelnanda-io/TransformerLens#536?

@ArthurConmy This was a tough call for me, but I think it's probably the right one. T-lens implements its own SAE, which has a few dissimilarities to ours and which might change / go out of sync. Part of this was that I tried to use the T-lens one with SAE Lens, and it needed enough changes that don't make sense as PRs to T-lens that I thought it might be best to duplicate and keep them in sync.

However, this thinking is cached from a couple of weeks ago and my conviction is less strong now without more context. Possibly worth trying to enumerate the challenges and make changes to my SAE class or the HookedSAETransformer class such that this isn't necessary.
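Purely as an illustration of the kind of glue being weighed here (not code from either library): converting an SAE Lens autoencoder's weights into a transformer_lens HookedSAE so the upstream HookedSAETransformer could run it. The attribute names assumed on the SAE Lens side (W_enc, b_enc, W_dec, b_dec, cfg.hook_point) are hypothetical.

```python
from transformer_lens import HookedSAE, HookedSAEConfig

def to_hooked_sae(sae_lens_sae):
    # Build an upstream config from the (assumed) SAE Lens attributes.
    cfg = HookedSAEConfig(
        d_sae=sae_lens_sae.W_enc.shape[1],
        d_in=sae_lens_sae.W_enc.shape[0],
        hook_name=sae_lens_sae.cfg.hook_point,
    )
    hooked = HookedSAE(cfg)
    # Copy weights across; strict=False tolerates extra buffers on either side.
    hooked.load_state_dict(
        {
            "W_enc": sae_lens_sae.W_enc.data,
            "b_enc": sae_lens_sae.b_enc.data,
            "W_dec": sae_lens_sae.W_dec.data,
            "b_dec": sae_lens_sae.b_dec.data,
        },
        strict=False,
    )
    return hooked
```

If the two SAE formulations diverge beyond a weight copy (e.g. different normalisation or bias handling), a converter like this breaks silently, which is the sync risk described above.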

@ArthurConmy
Contributor

ArthurConmy commented May 9, 2024

@jbloomAus there is definitely a problem with too many SAE classes. I think there should probably be one training SAE (likely the SAE in this codebase) and one inference SAE (used for circuit analysis and feature viz), but currently we have

  1. SAE for training in this lib
  2. SAE for inference in TransformerLens
  3. SAE for inference (feature analysis) in sae_vis
  4. This proposal too

which seems like too many (already, the fact that I had to use both 3) and 2) when doing steering vectors work sucked, IMO).

ETA: ping me if I should make this a new issue somewhere.

@jbloomAus
Owner

I don't mind having the discussion here.

My thoughts are:

  1. Agree that 3 basically shouldn't exist. I'm refactoring SAE vis heavily for Neuronpedia and planning on removing its native SAE class and having that version just depend on SAE Lens.
  2. I'd be surprised if we could avoid having at least 2 SAE classes, but plausibly we can get there (one training class and one inference class). I take your point, and concede it's worth a second effort to see if we can avoid the duplication (in particular, of HookedSAETransformer). I'll look at what it would take to refactor SAE Lens into a training SAE and an inference SAE, see how close we can make the latter to the T-lens version, and whether we can simply rely on the T-lens HookedSAETransformer. (A rough sketch of that split is below.)
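As referenced in point 2, here is an illustrative sketch of what a training/inference split might look like. The class names, the tied-bias encoder, and the L1 penalty are all assumptions for the example, not the eventual SAE Lens design.

```python
import torch
import torch.nn as nn

class InferenceSAE(nn.Module):
    """Minimal SAE for circuit analysis / feature viz: encode and decode only."""

    def __init__(self, d_in: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_in, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_in) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)

    def decode(self, feats: torch.Tensor) -> torch.Tensor:
        return feats @ self.W_dec + self.b_dec

class TrainingSAE(InferenceSAE):
    """Adds training-only concerns (here, an L1 sparsity penalty) on top."""

    def __init__(self, d_in: int, d_sae: int, l1_coefficient: float = 1e-3):
        super().__init__(d_in, d_sae)
        self.l1_coefficient = l1_coefficient

    def loss(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encode(x)
        recon = self.decode(feats)
        mse = (recon - x).pow(2).mean()
        l1 = feats.abs().sum(dim=-1).mean()
        return mse + self.l1_coefficient * l1
```

The inference class is the only thing a HookedSAETransformer (T-lens or local) would need to know about, which is what would let the two stay in sync.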

@jbloomAus
Owner

@dtch1997 Thanks for this! We now have HookedSAETransformer. I appreciate you doing this, and sorry about how it turned out with the duplicated work. Will try to find a way to make it up to you.

@jbloomAus jbloomAus closed this May 29, 2024