**WIP: This codebase is under active development**
Because language models are trained to predict the next token in naturally occurring text, they often reproduce common human errors and misconceptions, even when they "know better" in some sense. More worryingly, when models are trained to generate text that's rated highly by humans, they may learn to output false statements that human evaluators can't detect. We aim to circumvent this issue by directly **eliciting latent knowledge** (ELK) inside the activations of a language model.
Specifically, we're building on the Contrastive Representation Clustering (CRC) method described in the paper *Discovering Latent Knowledge in Language Models Without Supervision* by Burns et al. (2022). In CRC, we search for features in the hidden states of a language model which satisfy certain logical consistency requirements. It turns out that these features are often useful for question-answering and text classification tasks, even though the features are trained without labels.
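To give a flavor of what that means in practice, here is a minimal PyTorch sketch of the CRC "top principal component" idea. This is our illustration, not the implementation this repo uses; the function name and tensor shapes are ours:

```python
import torch

def crc_tpc_direction(h_pos: torch.Tensor, h_neg: torch.Tensor) -> torch.Tensor:
    """Candidate 'truth' direction from contrast pairs.

    h_pos, h_neg: (n_examples, hidden_dim) hidden states for a statement
    and its negation, respectively.
    """
    # Normalize each side separately so the direction can't simply encode
    # surface features like the presence of a negation word.
    h_pos = (h_pos - h_pos.mean(0)) / (h_pos.std(0) + 1e-8)
    h_neg = (h_neg - h_neg.mean(0)) / (h_neg.std(0) + 1e-8)
    diffs = h_pos - h_neg  # one difference vector per contrast pair
    # The top principal component of the differences tends to separate
    # true from false statements, without using any labels.
    _, _, v = torch.pca_lowrank(diffs, q=1)
    return v[:, 0]  # shape: (hidden_dim,)
```

The resulting direction can be used as an unsupervised linear probe by projecting hidden states onto it and thresholding. Note that the sign is arbitrary: an unsupervised method like this only identifies the direction up to a flip.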
Our code is based on PyTorch and HuggingFace Transformers. We test the code on Python 3.10 and 3.11.
First install the package with `pip install -e .` in the root directory. Use `pip install -e .[dev]` if you'd like to contribute to the project (see the Development section below). This should install all the necessary dependencies.
To fit reporters for the HuggingFace model `model` and dataset `dataset`, just run:

```bash
ccs elicit microsoft/deberta-v2-xxlarge-mnli imdb
```
This will automatically download the model and dataset, run the model and extract the relevant representations if they aren't cached on disk, fit reporters on them, and save the reporter checkpoints to the `ccs-reporters` folder in your home directory. It will also evaluate the reporter's classification performance on a held-out test set and save the results to a CSV file in the same folder.
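If you want to inspect those results programmatically, something like the following works. The run name and file name below are hypothetical; check your actual run folder for the exact layout:

```python
import pandas as pd
from pathlib import Path

# "naughty-northcutt" is a placeholder run name; substitute your own.
run_dir = Path.home() / "ccs-reporters" / "naughty-northcutt"
df = pd.read_csv(run_dir / "eval.csv")  # file name assumed; check your run folder
print(df.head())
```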
The following will train a CCS (Contrast-Consistent Search) reporter instead of the CRC-based reporter, which is the default:

```bash
ccs elicit microsoft/deberta-v2-xxlarge-mnli imdb --net ccs
```
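For reference, CCS trains a probe to satisfy a consistency condition (a statement and its negation should get probabilities summing to one) while avoiding the trivial always-0.5 solution. Here is a minimal sketch of the loss from Burns et al. (2022), not the repo's exact implementation:

```python
import torch

def ccs_loss(p_pos: torch.Tensor, p_neg: torch.Tensor) -> torch.Tensor:
    """p_pos, p_neg: probe probabilities that a statement and its
    negation are true, each of shape (n_examples,)."""
    # Consistency: the two probabilities should sum to 1.
    consistency = (p_pos - (1 - p_neg)) ** 2
    # Confidence: discourage the degenerate solution p_pos = p_neg = 0.5.
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()
```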
The following command will evaluate the probe from the run `naughty-northcutt` on the hidden states extracted from the model `deberta-v2-xxlarge-mnli` for the `imdb` dataset. It will produce an `eval.csv` and a `cfg.yaml` file, stored under a subfolder in `ccs-reporters/naughty-northcutt/transfer_eval`.

```bash
ccs eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb
```
The following runs `elicit` on the Cartesian product of the listed models and datasets, storing the results in a special folder `CCS_DIR/sweeps/<memorable_name>`. Moreover, `--add_pooled` adds an additional dataset that pools all of the datasets together. You can also pass the `--visualize` flag to visualize the results of the sweep.

```bash
ccs sweep --models gpt2-{medium,large,xl} --datasets imdb amazon_polarity --add_pooled
```
If you just run `ccs plot`, it will plot the results from the most recent sweep. If you want to plot a specific sweep, you can do so with:

```bash
ccs plot {sweep_name}
```
The hidden states resulting from `ccs elicit` are cached as a HuggingFace dataset to avoid having to recompute them every time we want to train a probe. The cache is stored in the same place as all other HuggingFace datasets, which is usually `~/.cache/huggingface/datasets`.
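If you're curious how much disk space the cached datasets take up, here is a quick check in plain Python (nothing repo-specific; it assumes the default cache location):

```python
from pathlib import Path

cache = Path.home() / ".cache" / "huggingface" / "datasets"
if cache.exists():
    for entry in sorted(cache.iterdir()):
        if entry.is_dir():
            size_mb = sum(f.stat().st_size for f in entry.rglob("*") if f.is_file()) / 1e6
            print(f"{entry.name}: {size_mb:.1f} MB")
```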
## Development

Use `pip install pre-commit && pre-commit install` in the root folder before your first commit.
Run the test suite with:

```bash
pytest
```
We use pyright, which is built into the VSCode editor. If you'd like to run it as a standalone tool, it requires a Node.js installation. Run it with:

```bash
pyright
```
We use ruff. It is installed as a pre-commit hook, so it runs automatically before each commit. If you want to run it manually, you can do so with:

```bash
ruff . --fix
```
If you work on a new feature, fix, or other code task, make sure to create an issue and assign it to yourself (and maybe share it in the elk channel of EleutherAI's Discord with a short note). That way, others know you're working on the issue and won't duplicate your work 👍 It also makes it easy for others to contact you.