Add logic to train JumpReLU SAEs #352
Conversation
Some minor comments but core logic looks great! I'll try training a JumpReLU SAE to make sure this works.

It feels odd that `l1_coefficient` is used for the JumpReLU sparsity loss coefficient, and that it gets logged as `l1_loss`. Maybe in a follow-up PR we should rename `l1_coefficient` to `sparsity_coefficient` instead? I added PR #357 which lets the forward pass log any losses it wants, so if that gets merged, it would probably make sense to call the loss `sparsity_loss`, since there's not technically any L1 loss in JumpReLU.
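To illustrate the distinction: the usual penalty is an L1 norm of the feature activations, while the JumpReLU penalty is an (STE-relaxed) L0 count of active features. A rough sketch with illustrative names, not the actual sae_lens code:

```python
import torch


def l1_sparsity_loss(feature_acts: torch.Tensor, coefficient: float) -> torch.Tensor:
    # Standard SAE penalty: scaled L1 norm of the feature activations.
    return coefficient * feature_acts.abs().sum(dim=-1).mean()


def jumprelu_sparsity_loss(
    hidden_pre: torch.Tensor, threshold: torch.Tensor, coefficient: float
) -> torch.Tensor:
    # JumpReLU penalty: a scaled L0 count of how many pre-activations clear the
    # learned threshold. (In training this goes through a straight-through
    # estimator so the threshold still receives gradients.)
    l0 = (hidden_pre > threshold).to(hidden_pre.dtype).sum(dim=-1)
    return coefficient * l0.mean()
```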
sae_lens/config.py
Outdated
```diff
@@ -162,6 +163,7 @@ class LanguageModelSAERunnerConfig:
     seed: int = 42
     dtype: str = "float32"  # type: ignore
     prepend_bos: bool = True
+    threshold: float = 0.001
```
nit: I think this would be clearer as `jumprelu_init_threshold` or something, to make it clear this is only used for initialization.
This should also have `jumprelu_bandwidth` as a param; currently it seems hardcoded.
Oops, thanks for catching that the bandwidth wasn't configurable. I've renamed the first field and added the second.
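For reference, the config fields might end up looking roughly like this (field names follow the suggestions above; the defaults are illustrative and may differ from the actual PR):

```python
from dataclasses import dataclass


@dataclass
class LanguageModelSAERunnerConfig:
    # ...existing fields...
    seed: int = 42
    dtype: str = "float32"
    prepend_bos: bool = True
    # JumpReLU-specific options (names per the review suggestion; defaults illustrative)
    jumprelu_init_threshold: float = 0.001  # only used to initialize log_threshold
    jumprelu_bandwidth: float = 0.001  # width of the STE pseudo-derivative window
```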
```python
if self.cfg.architecture == "jumprelu":
    # Fold the trained log-space parameter into a plain `threshold` tensor
    # so the saved inference SAE can use it directly.
    threshold = torch.exp(self.log_threshold).detach()
    del state_dict["log_threshold"]
    state_dict["threshold"] = threshold
```
👍
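For context, the conversion above is needed because during training the threshold is kept in log-space so it stays positive. A minimal sketch of that parameterization, with illustrative names rather than the exact training_sae.py code:

```python
import math

import torch
import torch.nn as nn


class JumpReLUThreshold(nn.Module):
    """Illustrative only: holds the trainable threshold in log-space."""

    def __init__(self, d_sae: int, init_threshold: float = 0.001):
        super().__init__()
        # Train log(threshold) so that threshold = exp(log_threshold) is always positive.
        self.log_threshold = nn.Parameter(
            torch.full((d_sae,), math.log(init_threshold))
        )

    @property
    def threshold(self) -> torch.Tensor:
        return torch.exp(self.log_threshold)
```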
sae_lens/training/training_sae.py
Outdated
```python
def backward(ctx: Any, grad_output: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor, None]:  # type: ignore[override]
    x, threshold = ctx.saved_tensors
    bandwidth = ctx.bandwidth
    x_grad = 0.0 * grad_output  # We don't apply STE to x input
```
I think it's fine to just return `None` for the `x_grad` rather than multiplying by 0. I know this is just from the example code.
Oh cool, I've made the change.
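For anyone following along, a minimal sketch of a Step function whose STE backward returns `None` for the input gradient, based on the pseudo-derivative from the JumpReLU paper. This is not the exact sae_lens implementation, and the final batch reduction assumes a per-feature threshold:

```python
import torch
from torch import Tensor


def rectangle(x: Tensor) -> Tensor:
    # 1 inside the open interval (-0.5, 0.5), 0 elsewhere.
    return ((x > -0.5) & (x < 0.5)).to(x.dtype)


class Step(torch.autograd.Function):
    """Heaviside step with a straight-through pseudo-derivative for the threshold."""

    @staticmethod
    def forward(ctx, x: Tensor, threshold: Tensor, bandwidth: float) -> Tensor:
        ctx.save_for_backward(x, threshold)
        ctx.bandwidth = bandwidth
        return (x > threshold).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_output: Tensor):  # type: ignore[override]
        x, threshold = ctx.saved_tensors
        bandwidth = ctx.bandwidth
        # Pseudo-derivative w.r.t. the threshold only.
        threshold_grad = (
            -(1.0 / bandwidth) * rectangle((x - threshold) / bandwidth) * grad_output
        )
        # Reduce leading batch dims so the gradient matches the threshold's shape.
        while threshold_grad.dim() > threshold.dim():
            threshold_grad = threshold_grad.sum(dim=0)
        # None for x (no STE on the input) and None for the non-tensor bandwidth arg.
        return None, threshold_grad, None
```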
Thanks! Let me know if there's anything else you need from me. And I'm guessing unless you think we should keep the
Trying a test-run now, but one more thing is that the typing for
It seems like the general paradigm of most of the recent SAE architectures is having some sort of sparsity-inducing loss, either L1 in standard SAEs or L0 in JumpReLU (or nothing for TopK), and then an optional auxiliary loss. I'd be for just calling it
LGTM! Great work with this!
Thanks!
Description
Adds logic to train JumpReLU SAEs. JumpReLU is a state-of-the-art architecture, so users should be able to train SAEs with it; currently they can only load pre-trained JumpReLU SAEs and run inference with them.
Fixes #330
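For readers unfamiliar with the architecture: at inference time a JumpReLU SAE simply zeroes out any feature whose pre-activation does not clear a learned per-feature threshold. A minimal sketch, not the actual SAE class:

```python
import torch


def jumprelu(hidden_pre: torch.Tensor, threshold: torch.Tensor) -> torch.Tensor:
    # Keep a feature only if its pre-activation clears that feature's threshold.
    return torch.relu(hidden_pre) * (hidden_pre > threshold).to(hidden_pre.dtype)
```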
Type of change
Please delete options that are not relevant.
Checklist:
- You have tested formatting, typing and unit tests (acceptance tests not currently in use)
- You have run `make check-ci` to check format and linting (you can run `make format` to format code if needed)

Performance Check

If you have implemented a training change, please indicate precisely how performance changes with respect to the following metrics:

Please link to wandb dashboards with a control and test group.