
Add logic to train JumpReLU SAEs #352

Merged

Conversation

@anthonyduong9 anthonyduong9 commented Oct 30, 2024

Description

Adds logic to train JumpReLU SAEs. JumpReLU is a state-of-the-art architecture, so users should be able to train JumpReLU SAEs. Currently, they can load pre-trained JumpReLU SAEs and perform inference with them.
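
For context, a minimal sketch of the JumpReLU activation being trained here (illustrative only; the names pre_acts and threshold are assumptions, not taken from this PR's diff):

import torch

def jumprelu(pre_acts: torch.Tensor, threshold: torch.Tensor) -> torch.Tensor:
    # Pass pre-activations above the learned (per-feature) threshold through
    # unchanged, and zero everything else.
    return pre_acts * (pre_acts > threshold).to(pre_acts.dtype)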

Fixes #330

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)

Checklist:

  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have not rewritten tests relating to key interfaces which would affect backward compatibility

You have tested formatting, typing, and unit tests (acceptance tests are not currently in use)

  • I have run make check-ci to check format and linting. (you can run make format to format code if needed.)

Performance Check.

If you have implemented a training change, please indicate precisely how performance changes with respect to the following metrics:

  • L0
  • CE Loss
  • MSE Loss
  • Feature Dashboard Interpretability

Please link to wandb dashboards with a control and a test group.

@anthonyduong9 anthonyduong9 marked this pull request as ready for review October 30, 2024 08:09
@anthonyduong9 anthonyduong9 changed the title from "Add logic to train jump re lu sa es" to "Add logic to train JumpReLU SAEs" Oct 30, 2024
@chanind chanind (Collaborator) left a comment

Some minor comments but core logic looks great! I'll try training a JumpReLU SAE to make sure this works.

It feels odd that the l1_coefficient is used for the JumpReLU sparsity loss coefficient, and that it gets logged as l1_loss. Maybe in a follow-up PR we should rename l1_coefficient to sparsity_coefficient instead? I added a PR #357 which lets the forward pass log any losses it wants, so if that gets merged, it would probably make sense to call the loss sparsity_loss since there's not technically any L1 loss in jumprelu.

@@ -162,6 +163,7 @@ class LanguageModelSAERunnerConfig:
seed: int = 42
dtype: str = "float32" # type: ignore #
prepend_bos: bool = True
threshold: float = 0.001
Collaborator:

nit: I think this would be clearer as a jumprelu_init_threshold or something to make it clear this is only used for initialization

Collaborator:

this should also have jumprelu_bandwidth as a param; currently it seems to be hardcoded

Contributor (Author):

Oops, thanks for catching that bandwidth wasn't configurable. I've renamed the first field and added the second.
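
For reference, a sketch of how the renamed config fields might look (the names follow the suggestions above; the default values shown are illustrative and not taken from the final diff):

class LanguageModelSAERunnerConfig:
    ...
    # Initial value of the JumpReLU threshold; only used at initialization.
    jumprelu_init_threshold: float = 0.001
    # Width of the rectangle kernel used by the straight-through estimators.
    jumprelu_bandwidth: float = 0.001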

if self.cfg.architecture == "jumprelu":
threshold = torch.exp(self.log_threshold).detach()
del state_dict["log_threshold"]
state_dict["threshold"] = threshold
Collaborator:

👍

def backward(ctx: Any, grad_output: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor, None]: # type: ignore[override]
x, threshold = ctx.saved_tensors
bandwidth = ctx.bandwidth
x_grad = 0.0 * grad_output # We don't apply STE to x input
Collaborator:

I think it's fine to just return None for the x_grad rather than multiplying by 0. I know this is just from the example code

Contributor (Author):

Oh cool, I've made the change.
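
For clarity, a sketch of what the updated backward might look like once x_grad is replaced with None. It follows the Step-function pseudocode from the Gemma Scope paper; the rectangle helper, the forward pass, and the exact threshold gradient are assumptions and are not shown in this diff:

import torch


def rectangle(x: torch.Tensor) -> torch.Tensor:
    # 1 inside (-0.5, 0.5) and 0 elsewhere; the kernel used by the straight-through estimator.
    return ((x > -0.5) & (x < 0.5)).to(x.dtype)


class Step(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, threshold, bandwidth):  # type: ignore[override]
        ctx.save_for_backward(x, threshold)
        ctx.bandwidth = bandwidth
        return (x > threshold).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_output):  # type: ignore[override]
        x, threshold = ctx.saved_tensors
        bandwidth = ctx.bandwidth
        # Pseudo-gradient for the threshold via the rectangle kernel.
        threshold_grad = (
            -(1.0 / bandwidth)
            * rectangle((x - threshold) / bandwidth)
            * grad_output
        )
        # Return None for the x gradient instead of 0.0 * grad_output,
        # and None for bandwidth, which is not a learned parameter.
        return None, threshold_grad, None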

@anthonyduong9 anthonyduong9 (Contributor, Author) commented:

> Some minor comments but core logic looks great! I'll try training a JumpReLU SAE to make sure this works.
>
> It feels odd that the l1_coefficient is used for the JumpReLU sparsity loss coefficient, and that it gets logged as l1_loss. Maybe in a follow-up PR we should rename l1_coefficient to sparsity_coefficient instead? I added a PR #357 which lets the forward pass log any losses it wants, so if that gets merged, it would probably make sense to call the loss sparsity_loss since there's not technically any L1 loss in jumprelu.

Thanks! Let me know if there's anything else you need from me.

And unless you think we should keep the l1_coefficient field for the other architecture(s) and add a sparsity_coefficient that's used just for JumpReLU, I think your suggestion makes sense.

chanind commented Nov 3, 2024

Trying a test run now, but one more thing: the typing of architecture in LanguageModelSAERunnerConfig needs to be updated as well - it's currently "standard" | "gated", so it throws a typing error when setting "jumprelu".
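
For example, a one-line sketch of the fix (assuming the field is declared with typing.Literal; the default shown here is illustrative):

from typing import Literal

# In LanguageModelSAERunnerConfig:
architecture: Literal["standard", "gated", "jumprelu"] = "standard"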

chanind commented Nov 3, 2024

It seems like the general paradigm of most of the recent SAE architectures is having some sort of sparsity-inducing loss, either l1 in normal SAEs or l0 in jumprelu (or nothing for topk), plus an optional auxiliary loss. I'd be for just calling it sparsity_coefficient to handle the architecture differences, but that should probably be discussed in a separate follow-up issue.
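
As a rough illustration of that paradigm (a schematic sketch only; sparsity_coefficient is the proposed name, and none of these function names come from the codebase):

import torch


def sparsity_loss(
    feature_acts: torch.Tensor, architecture: str, sparsity_coefficient: float
) -> torch.Tensor:
    if architecture == "standard":
        # L1 penalty on the feature activations.
        per_example = feature_acts.abs().sum(dim=-1)
    elif architecture == "jumprelu":
        # L0 penalty; in training this is computed through a Step function with a
        # straight-through estimator so the threshold still receives gradients.
        per_example = (feature_acts > 0).to(feature_acts.dtype).sum(dim=-1)
    else:
        # e.g. topk: no explicit sparsity penalty.
        return feature_acts.new_zeros(())
    return sparsity_coefficient * per_example.mean()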

chanind commented Nov 3, 2024

[Screenshot: training metrics from the test run, 2024-11-03]

I tried training for 1B tokens with a sparsity coeff of 1e-3 (which I think is reasonable based on the Gemma Scope paper). It looks like L0 is still coming down by the end of training (Gemma Scope trains on 4B tokens), so I'm assuming this would plateau at a reasonable loss if I kept going.

@chanind chanind (Collaborator) left a comment

LGTM! Great work with this!

@chanind chanind merged commit 0b56d03 into jbloomAus:main Nov 3, 2024
5 checks passed
@anthonyduong9 anthonyduong9 (Contributor, Author) commented:

> LGTM! Great work with this!

Thanks!


Successfully merging this pull request may close these issues.

[Proposal] Support training JumpReLU SAEs