Releases: inseq-team/inseq
v0.6.0: Context Attribution CLI, New Attribution Methods, Performance Improvements and more
🔙 Context Attribution CLI (#237)
The `inseq attribute-context` CLI command was added to support the PECoRe framework for analyzing context usage in generative language models. The command is highly customizable, allowing users to pick custom contrastive step functions to detect context sensitivity during generation (CTI step) and any attribution method to attribute context reliance (CCI step).
A demo using the Inseq API is available on Hugging Face Spaces. The demo supports flexible parametrization, and the equivalent Python/Bash code can be generated by clicking on the `Show code` button.
Example
The following example uses a GPT-2 model to generate a continuation of `input_current_text`, and uses the additional context provided by `input_context_text` to estimate its influence on the generation. In this case, the output `"to the hospital. He said he was fine"` is produced, and the generation of the token `hospital` is found to be dependent on the context token `sick` according to the `contrast_prob_diff` step function.
inseq attribute-context \
--model_name_or_path gpt2 \
--input_context_text "George was sick yesterday." \
--input_current_text "His colleagues asked him to come" \
--attributed_fn "contrast_prob_diff"
Result:
Context with [contextual cues] (std λ=1.00) followed by output sentence with {context-sensitive target spans} (std λ=1.00)
(CTI = "kl_divergence", CCI = "saliency" w/ "contrast_prob_diff" target)
Input context: George was sick yesterday.
Input current: His colleagues asked him to come
Output current: to the hospital. He said he was fine
#1.
Generated output (CTI > 0.428): to the {hospital}(0.548). He said he was fine
Input context (CCI > 0.460): George was [sick](0.516) yesterday.
🔍 New Attribution Methods: Value Zeroing and ReAGent (#173, #250)
The following two perturbation-based attribution methods were added:

- `value_zeroing`: Quantifying Context Mixing in Transformers (Mohebbi et al., 2023)
- `reagent`: ReAGent: A Model-agnostic Feature Attribution Method for Generative Language Models (Zhao et al., 2024)
Value zeroing is a Transformers-specific method that quantifies the layer-by-layer mixing of contextual information across token representations by zeroing the value vector associated with a specific input embedding (effectively preventing information mixing for a token position) and measuring the dissimilarity of the resulting representations with respect to the original model output. The Inseq implementation is highly flexible, supporting the zeroing of specific attention heads in specific layers and allowing fine-grained control of the zeroing process. Its effect is equivalent to the Attention Knockout method proposed in Geva et al. (2023) (zeroing the value vector instead of its associated attention weight).
The following example performs value zeroing on the cross-attention operation of an encoder-decoder translation model, keeping the value vectors of the self-attention operation in the encoder and the decoder modules unaltered. Only the output of the fourth layer is shown.
import inseq
model = inseq.load_model("Helsinki-NLP/opus-mt-en-fr", "value_zeroing")
out = model.attribute(
"A generative language models interpretability tool.",
encoder_zeroed_units_indices={},
decoder_zeroed_units_indices={},
)
out.show(select_idx=4)
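For intuition, the snippet below sketches the dissimilarity computation that value zeroing relies on, using the cosine distance between original and perturbed token representations. It is a minimal illustration on toy tensors, not the library implementation, which additionally handles layer/head selection and aggregation.
import torch
import torch.nn.functional as F

def value_zeroing_scores(original_states, zeroed_states):
    # Per-token importance as the cosine dissimilarity between the token
    # representations of the unperturbed forward pass and those obtained after
    # zeroing the value vectors of one input position. Shapes: (seq_len, hidden_size).
    return 1.0 - F.cosine_similarity(original_states, zeroed_states, dim=-1)

# Toy usage with random tensors standing in for real hidden states.
original = torch.randn(6, 512)
perturbed = original + 0.1 * torch.randn(6, 512)  # pretend outputs after zeroing position j
print(value_zeroing_scores(original, perturbed))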
ReAGent is a model-agnostic method that quantifies the importance of input features by measuring the change in model output in a recursive process that replaces salient input tokens with plausible alternatives produced by a language model. The method is particularly useful to avoid the out-of-distribution issues of regular occlusion approaches that use 0-valued vectors as replacements.
The following example uses the ReAGent method to attribute the generation of a GPT-2 decoder-only LM.
import inseq
model = inseq.load_model(
"gpt2-medium",
"reagent",
keep_top_n=5,
stopping_condition_top_k=3,
replacing_ratio=0.3,
max_probe_steps=3000,
num_probes=8
)
out = model.attribute("Super Mario Land is a game that developed by")
out.show()
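For intuition only, the following heavily simplified, single-pass sketch captures the underlying idea of replacement-based attribution: each input token is swapped with a plausible alternative sampled from the model itself, and the drop in the probability of the would-be generated token is recorded. The actual ReAGent procedure is recursive and governed by the probing/stopping parameters shown above; the model and prompt here are only illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "Super Mario Land is a game that developed by"
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    next_logits = lm(ids).logits[0, -1]
target_id = int(next_logits.argmax())                 # token the model would generate next
base_prob = float(next_logits.softmax(-1)[target_id])

scores = [0.0]  # position 0 is skipped (no preceding context to sample a replacement from)
for pos in range(1, ids.shape[1]):
    perturbed = ids.clone()
    with torch.no_grad():
        # Replace the token at `pos` with a plausible alternative sampled from the
        # model's own prediction given the preceding tokens, then re-score the target.
        ctx_logits = lm(ids[:, :pos]).logits[0, -1]
        perturbed[0, pos] = int(torch.multinomial(ctx_logits.softmax(-1), 1))
        new_prob = float(lm(perturbed).logits[0, -1].softmax(-1)[target_id])
    scores.append(base_prob - new_prob)  # larger drop = more important token

print(list(zip(tok.convert_ids_to_tokens(ids[0].tolist()), scores)))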
🚀 Improved Performance for Single-step Attribution Methods and Multi-GPU support (#173, #238)
- The `value_zeroing` and `attention` methods now use scores from the last generation step to produce outputs more efficiently (`is_final_step_method = True`). This change allows these single-step methods to avoid iterating over the full sequence, greatly reducing their cost.
- Inseq now supports multi-GPU attribution for all models and methods, allowing users to distribute the attribution process across multiple GPUs. The feature is particularly useful for large models and long sequences, where the attribution process can be computationally expensive. A loading sketch is shown below.
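As an illustration of one way to set this up (assuming the 🤗 Transformers/Accelerate `device_map="auto"` dispatch is available; model and method are illustrative, not the only supported configuration):
import inseq
from transformers import AutoModelForCausalLM

# Shard the model across the available GPUs via the automatic device map,
# then wrap it with Inseq as usual; attribution follows the model's placement.
hf_model = AutoModelForCausalLM.from_pretrained("gpt2-xl", device_map="auto")
inseq_model = inseq.load_model(hf_model, "saliency", tokenizer="gpt2-xl")
out = inseq_model.attribute("Hello world, this is the Inseq library")
out.show()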
💥 Breaking Changes
- If `attention` is used as the attribution method in `model.attribute`, `step_scores` cannot be extracted at the same time, since the method does not iterate over the full sequence anymore. (#173) As an alternative, step scores can be extracted separately using the `dummy` attribution method (i.e. no attribution), as sketched after this list.
- BOS is always included in target-side attribution and generated sequences if present. (#173)
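A minimal sketch of the suggested workaround, using the `dummy` method purely to collect step scores (model, input and the requested score are illustrative):
import inseq

# The "dummy" method runs the forward passes needed for step scores
# without computing any attribution.
model = inseq.load_model("gpt2", "dummy")
out = model.attribute(
    "His colleagues asked him to come",
    step_scores=["probability"],
)
out.show()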
All Merged PRs
🚀 Features
- Support for multi-GPU attribution (#238) @gsarti
- Added the `inseq attribute-context` CLI command to support the PECoRe framework for detecting and attributing context reliance in generative LMs (#237) @gsarti
- Added the `value_zeroing` (`inseq.attr.feat.perturbation_attribution.ValueZeroingAttribution`) attribution method and `is_final_step_method = True` support (#173) @gsarti
- Added the `reagent` (`inseq.attr.feat.perturbation_attribution.ReAgentAttribution`) attribution method (#250) @casszhao @xuan25 @gsarti
🔧 Fixes & Refactoring
- Fix URL to arXiv (#259) @bbjoverbeek
- Fix `ContiguousSpanAggregator` and `SubwordAggregator` edge case of single-step generation (#247) @gsarti
- Move tensors to CPU right away in the forward pass to avoid OOM when cloning (#245) @gsarti
- Fix `remap_from_filtered` behavior on `sequence_scores` tensors (#245) @gsarti
- Use torch-native padding when converting lists of `FeatureAttributionStepOutput` to `FeatureAttributionSequenceOutput` in `get_sequences_from_batched_steps` (#245) @gsarti
- Bump `ruff` version (#245) @gsarti
- Drop `poetry` in favor of `uv` to accelerate package installation and simplify config in `pyproject.toml` (#249) @gsarti
- Drop `darglint` in favor of `pydoclint` (#249) @gsarti
- Replace arXiv with ACL Anthology badge in `README` (#249) @gsarti
- Add first version of `CHANGELOG.md` (#249) @gsarti
- Added multithread support for running tests using `pytest-xdist` @gsarti
📝 Documentation and Tutorials
- No changes
👥 List of contributors
v0.5.0: Tutorial, Better contrastive attribution, 4-bit/Petals support and more
📄 New Tutorial and Better Documentation
- A new quickstart tutorial is available in the repository, introducing feature attribution methods and showcasing basic and more advanced Inseq use-cases.
- Documentation now uses the Sphinx `furo` theme.
- A new utility function `inseq.explain` was introduced to visualize the documentation associated with string identifiers used for attribution methods, step functions and aggregators:
import inseq
inseq.explain("saliency")
>>> Saliency attribution method.
Reference implementation:
`https://captum.ai/api/saliency.html <https://captum.ai/api/saliency.html>`__.
🔀 More Flexible and Intuitive Contrastive Attribution (#193, #195, #207, #228)
- Contrastive attribution functions now support original and contrastive targets of different lengths, using right-side alignment of tokens by default to simplify usage for studies adopting preceding context as the contrastive option.
- Contrastive source and target inputs can be specified as strings in `model.attribute` when using a contrastive step function or attribution target, via the `contrast_sources` and `contrast_targets` arguments (see docs).
- Custom alignments can be provided for contrastive step functions to compare specific step pairs using the `contrast_targets_alignments` argument in `model.attribute`. Using `"auto"` relies on a multilingual LaBSE encoder to create alignments with the AWESOME approach (useful for generation tasks preserving semantic equivalence, e.g. machine translation).
- The `is_attributed_fn` argument in `StepFunctionBaseArgs` can be used to customize the behavior of step functions in the attributed or the regular case.

Refer to the quickstart tutorial for examples of contrastive attribution; a minimal sketch is also shown below.
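A minimal sketch of contrastive attribution with the new string-based arguments (model, inputs and the contrastive alternative are illustrative; see the tutorial for vetted examples):
import inseq

# Attribute the probability difference between an original continuation and a
# contrastive alternative, both specified directly as strings.
model = inseq.load_model("gpt2", "saliency")
out = model.attribute(
    "Can you stop the dog from",
    "Can you stop the dog from barking",
    attributed_fn="contrast_prob_diff",
    contrast_targets="Can you stop the dog from crying",
)
out.show()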
🤗 Support for Distributed and 4-bit Models (#186, #205)
Towards the goal of democratizing access to interpretability methods for analyzing state-of-the-art models, Inseq now supports attribution of distributed language models from the Petals library and of 4-bit quantized LMs from the 🤗 Transformers `bitsandbytes` integration (using `load_in_4bit=True`), with the added flexibility of the Inseq API.
Example of contrastive gradient attribution of a distributed LLaMA 65B model:
import inseq
from petals import AutoDistributedModelForCausalLM

model_name = "enoch/llama-65b-hf"
model = AutoDistributedModelForCausalLM.from_pretrained(model_name).cuda()
inseq_model = inseq.load_model(model, "saliency")
prompt = (
    "Option 1: Take a 50 minute bus, then a half hour train, and finally a 10 minute bike ride.\n"
    "Option 2: Take a 10 minute bus, then an hour train, and finally a 30 minute bike ride.\n"
    "Which of the options above is faster to get to work?\n"
    "Answer: Option"
)
out = inseq_model.attribute(
    prompt,
    prompt + "1",
    attributed_fn="contrast_prob_diff",
    contrast_targets=prompt + "2",
)
Refer to the doc guide for more details.
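For 4-bit quantized models, a minimal sketch under the assumption that `bitsandbytes` is installed and the chosen checkpoint fits in memory (model, method and prompt are illustrative):
import inseq
from transformers import AutoModelForCausalLM

# Load the checkpoint in 4-bit precision through the bitsandbytes integration,
# then attribute it with Inseq as usual.
hf_model = AutoModelForCausalLM.from_pretrained("gpt2-xl", load_in_4bit=True, device_map="auto")
inseq_model = inseq.load_model(hf_model, "saliency", tokenizer="gpt2-xl")
out = inseq_model.attribute("Option 1 or Option 2? The faster option is Option")
out.show()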
🔍 New Step Functions and Attribution Methods (#182, #222, #223)
The following step functions were added as pre-registered in this release:
- `logits`: Logits of the target token.
- `contrast_logits` / `contrast_prob`: Logits/probabilities of the target token when different contrastive inputs are provided to the model. Equivalent to `logits` / `probability` when no contrastive inputs are provided.
- `pcxmi`: Point-wise Contextual Cross-Mutual Information (P-CXMI) for the target token given original and contrastive contexts (Yin et al. 2021).
- `kl_divergence`: KL divergence of the predictive distribution given original and contrastive contexts. Can be restricted to the most likely target token options using the `top_k` and `top_p` parameters.
- `in_context_pvi`: In-context Pointwise V-usable Information (PVI) to measure the amount of contextual information used in model predictions (Lu et al. 2023).
- `top_p_size`: The number of tokens with cumulative probability greater than `top_p` in the predictive distribution of the model.
The following attribution method was also added:

- `sequential_integrated_gradients`: Sequential Integrated Gradients: a simple but effective method for explaining language models (Enguehard, 2023)
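As a sketch of how the pre-registered step functions above can be requested during attribution (model, input and the selected scores are illustrative; the contrastive scores additionally require contrastive inputs such as `contrast_targets`):
import inseq

# Request pre-registered step scores alongside the attribution output.
model = inseq.load_model("gpt2", "saliency")
out = model.attribute(
    "The capital of France is",
    step_scores=["logits", "top_p_size"],
)
out.show()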
💥 Breaking Changes
- The `contrast_ids` and `contrast_attention_mask` parameters in `model.attribute` for contrastive step functions and attribution targets are deprecated in favor of `contrast_sources` and `contrast_targets`.
- Extraction and aggregation of attention weights for the `attention` method is now handled post-hoc via Aggregator classes, making it uniform with the API adopted for other attribution methods.
All Merged PRs
🚀 Features
- Attributed behavior for contrastive step functions (#228) @gsarti
- Step functions fixes, add `in_context_pvi` (#223) @gsarti
- Add Sequential IG method (#222) @gsarti
- Allow contrastive attribution with shorter contrastive targets (#207) @gsarti
- Add `top_p_size` step fn, `StepFunctionArgs` class (#206) @gsarti
- Support `petals` distributed model classes (#205) @gsarti
- Custom alignment of `contrast_targets` for contrastive attribution methods (#195) @gsarti
- Tokens diff view for contrastive attribution methods (#193) @gsarti
- Handle .to for 4bit quantized models (#186) @g8a9
- Aggregation functions, named aggregators, contrastive context step functions, `inseq.explain` (#182) @gsarti
- Target prefix-constrained generation (#172) @gsarti
🔧 Fixes & Refactoring
- Bump dependencies, update version and readme (#236) @gsarti
- Add optional jax group to enforce compatible jaxlib version. (#235) @carschno
- Minor fixes (#233) @gsarti
- Migrate from torchtyping to jaxtyping (#226) @carschno
- Fix command for installing pre-commit hooks. (#229) @carschno
- Remove `max_input_length` from `model.encode` (#227) @gsarti
- Migrate to `ruff format` (#225) @gsarti
- Remove `contrast_target_prefixes` from contrastive step functions (#224) @gsarti
- Fix LIME and Occlusion outputs (#220) @gsarti
- Add model config (#216) @gsarti
- Fix tokenization space cleanup (#215) @gsarti
- Support `ContiguousSpanAggregation` when `attr_pos_start != 0` (#213) @gsarti
- Fix `merge_attributions` (#210) @DanielSc4
- Fix attribution remapping for decoder-only models (#204) @gsarti
- Remove forced seed in attribution (#199) @gsarti
- Fix `get_scores_dict` for duplicate tokens (#192) @gsarti
- Fix `get_scores_dicts` for non-initial `attr_pos_start` (#187) @gsarti
- Fix batching in generate (#184) @gsarti
- Generalize forward pass management with `InputFormatter` classes (#180) @gsarti
- Replaced type definitions for `PreTrainedTokenizer` with `PreTrainedTokenizerBase` (#179) @lsickert
📝 Documentation and Tutorials
- Update tutorial to contrastive attribution changes (#231) @gsarti
- Improved quickstart documentation (#201) @gsarti
- Add example tutorial (#196) @gsarti
- Fix Locate GPT-2 Knowledge tutorial in docs (#174) @gsarti
- Minor fixes to links and docs (#171) @gsarti
- Add `tuned-lens` integration tutorial to docs (#169) @gsarti
- Migrate docs to `furo` (#168) @gsarti
👥 List of contributors
@gsarti, @DanielSc4, @carschno, @g8a9 and @lsickert
v0.4.0: Perturbation-based methods, Int8 backward attribution, contrastive step function and more
What’s Changed
Perturbation-based Attribution Methods (#145)
Thanks to @nfelnlp, this version introduces the `PerturbationAttributionRegistry` and the `OcclusionAttribution` (`occlusion`) and `LimeAttribution` (`lime`) methods, both adapted from Captum's original implementations.
- Our implementation of Occlusion (Zeiler and Fergus, 2014) estimates feature importance by replacing each input token embedding with a baseline (default: UNK) and computing the difference in output, producing coarse-grained attribution scores (one per token).
- LIME (Ribeiro et al. 2016) samples points around a specified input example and uses model evaluations at these points to train a simpler interpretable "surrogate" model, such as a linear model. We adapt the implementation by Atanasova et al. for usage in the generative setting. A usage sketch follows this list.
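A minimal sketch of loading one of the new methods by its string identifier (model and input are illustrative):
import inseq

# Coarse-grained occlusion attribution: each input token is replaced with a
# baseline token and the change in the output is measured.
model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "occlusion")
out = model.attribute("Hello ladies and badgers!")
out.show()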
Attribute `bitsandbytes` Int8 Quantized Models (#163)
Since the 0.37 release of `bitsandbytes`, efficient matrix multiplication backward is enabled for all int8-quantized models loaded with 🤗 Transformers. In this release we support attributing int8 models with attribution methods relying on a backward pass (e.g. `integrated_gradients`, `saliency`). In the following simple example, we attribute the generation steps of a quantized GPT-2 1.5B model using the `input_x_gradient` method, with the whole process requiring less than 6 GB of GPU RAM:
import inseq
from transformers import AutoModelForCausalLM
hf_model = AutoModelForCausalLM.from_pretrained("gpt2-xl", load_in_8bit=True, device_map="auto")
inseq_model = inseq.load_model(hf_model, "input_x_gradient", tokenizer="gpt2-xl")
out = inseq_model.attribute("Hello world, this is the Inseq", generation_args={"max_new_tokens": 20})
out.show()
Contrastive and Uncertainty-weighted Attribution (#166)
This release introduces two new pre-registered step functions, `contrast_prob_diff` and `mc_dropout_prob_avg`.
- `contrast_prob_diff` computes the difference in probability between a generation target (e.g. `All the dogs are barking loudly`) and a contrastive alternative (e.g. `All the dogs are crying strongly`) at every generation step, with the constraint of a 1-1 token correspondence between the two strings. If used as `attributed_fn` in `model.attribute`, it corresponds to the Contrastive Attribution setup by Yin and Neubig, 2022.
- `mc_dropout_prob_avg` computes an uncertainty-weighted estimate of each generated token's probability using `n_mcd_steps` of the Monte Carlo Dropout method. If used as an attributed function instead of the vanilla `probability`, it can produce more robust attribution scores at the cost of more computation. A usage sketch is shown below.
See this tutorial in the documentation for a reference on how to register and use custom attributed functions.
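A minimal sketch of using `mc_dropout_prob_avg` as the attributed function (model, input and the number of MC Dropout steps are illustrative; `n_mcd_steps` is the parameter named above):
import inseq

# Attribute with respect to an uncertainty-weighted probability estimate obtained
# from several stochastic forward passes with dropout enabled.
model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "saliency")
out = model.attribute(
    "Hello ladies and badgers!",
    attributed_fn="mc_dropout_prob_avg",
    n_mcd_steps=10,
)
out.show()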
Multilingual MT and Factual Information Location Examples (#166)
Inseq documentation contains two new examples:
- Attributing Multilingual MT Models shows how to use Inseq to attribute the generations of multilingual MT models like M2M100 and NLLB, which require setting target language flags before generation.
- Locating Factual Knowledge in GPT-2 shows how layer-specific attribution methods can be used to obtain intermediate attributions of language models like GPT-2. Using the quantized and contrastive attribution approaches described above, the example reproduces some observations made by Meng et al. 2022 on the localization of factual knowledge in large language models.
All Merged PRs
🚀 Features
- Add OcclusionAttribution and LimeAttribution (#145) @nfelnlp
- `bitsandbytes` compatibility (#163) @gsarti
🔧 Fixes & Refactoring
- Demo Release Changes (#166) @gsarti
- Fix EOS baseline for models with `pad_token_id != 0` (#165) @gsarti
👥 List of contributors
This release wouldn't have been possible without the contributions of these amazing folks. Thank you!
v0.3.3: Attention attribution, new aggregation, improved saving/reloading and more
What’s Changed
Attention attribution (#148)
This release introduces a new category of attention attribution methods and adds support for `AttentionAttribution` (id: `attention`). This method attributes the generated outputs using raw attention weights extracted during the forward pass, as done inter alia by Jain and Wallace, 2019. The parameters `heads` and `layers` enable the choice of a single element (a single `int`), a range (a tuple `(start_idx, end_idx)`) or a set of custom valid indices (as `[idx_1, idx_2, ...]`) for attention heads and model layers respectively. The aggregation of multiple heads or layers can be performed using one of the default aggregators (e.g. `max`, `average`) or by defining a custom function and passing it to `aggregate_heads_fn` or `aggregate_layers_fn` in the call to `model.attribute()`.
Example of default usage:
import inseq
model = inseq.load_model("facebook/wmt19-en-de", "attention")
out = model.attribute("The developer argued with the designer because her idea cannot be implemented.")
The default behavior is set to minimize unnecessary parameter definitions. In the default case above, the result is the average across all attention heads of the final layer.
Example of advanced usage:
import inseq
model = inseq.load_model("facebook/wmt19-en-de", "attention")
out = model.attribute(
"The developer argued with the designer because her idea cannot be implemented.",
layers=(0, 5),
heads=[0, 2, 5, 7],
aggregate_heads_fn = "max"
)
In the case above, the outcome is a matrix of maximum attention weights of heads 0, 2, 5 and 7 after averaging their weights across the first 5 layers of the model.
Other attention methods will be added in upcoming releases (see summary issue #108).
L2 + Normalize default aggregation (#157)
Starting from this release, the default aggregation used to aggregate attribution scores at the token level for `GradientFeatureAttributionSequenceOutput` objects is the L2 norm of the tensor over the `hidden_size` dimension, followed by a step-wise normalization of the attributions (all attributions across source and target at every generation step sum to one). This replaces the previous approach, which was a simple sum over the hidden dimension followed by a division by the norm of the step attribution vector. Importantly, since the L2 norm is guaranteed to be positive, the resulting attribution scores will now always be positive (also for `integrated_gradients`).
Motivations:
- Good empirical faithfulness of this aggregation procedure on transformer-based models, as shown by Bastings et al. 2022
- Improved understanding of the individual contribution of every input to the generation of the output by means of positivity and normalization.
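A minimal sketch of the new default aggregation applied to a single generation step (shapes are illustrative):
import torch

# attributions: (num_attributed_tokens, hidden_size) gradient scores for one generation step
attributions = torch.randn(8, 512)

# L2 norm over the hidden dimension yields one positive score per token...
token_scores = attributions.norm(p=2, dim=-1)
# ...then step-wise normalization makes the scores sum to one for this step.
token_scores = token_scores / token_scores.sum()

print(token_scores, token_scores.sum())  # sums to 1.0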
Improved saving and reloading of attributions (#157)
When saving attribution outputs, it is now possible to obtain one file per sequence by specifying `split_sequences=True`, and to automatically zip the generated outputs with `compress=True`.
import inseq
model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "saliency")
out = model.attribute(["sequence one", "sequence number two"])
# Creates out_0.json.gz, out_1.json.gz
out.save("out.json.gz", split_sequences=True, compress=True)
Export attributions for usage with pandas (#157)
The new method `FeatureAttributionOutput.get_scores_dicts` allows exporting `source_attributions`, `target_attributions` and `step_scores` as dictionaries that can be easily loaded into `pd.DataFrame` objects for further analysis (thanks @MoritzLaurer for raising the issue!). Example usage:
import inseq
import pandas as pd
model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "saliency")
out = model.attribute(
["Hello ladies and badgers!", "This is a test input"], attribute_target=True, step_scores=["probability", "entropy"]
)
# A list of dataframes (one per sequence) corresponding to source matrices in out.show
dfs = [pd.DataFrame(x["source_attributions"]) for x in out.get_scores_dicts()]
# A list of dataframes (one per sequence) corresponding to target matrices in out.show
dfs = [pd.DataFrame(x["target_attributions"]) for x in out.get_scores_dicts()]
# A list of dataframes (one per sequence) with step scores ids as rows and generated target tokens as columns
dfs = [pd.DataFrame(x["step_scores"]) for x in out.get_scores_dicts()]
ruff for style and quality checks (#159)
From this release Inseq drops `flake8`, `isort`, `pylint` and `pyupgrade` linting and moves to `ruff` with the corresponding extensions for style and quality checks. This dramatically speeds up `build` checks (from ~4 minutes to <1 second). Library developers are advised to integrate `ruff` into their automatic checks during coding (a VSCode extension and a PyCharm plugin are available).
All Merged PRs
🚀 Features
- `ruff` stylechecking (#159) @gsarti
- Minor fixes to 0.3.2 (#157) @gsarti
- Basic Attention attribution (#148) @lsickert
🔧 Fixes & Refactoring
- Fix build badge (#152) @gsarti
- `ruff` stylechecking (#159) @gsarti
- Minor fixes to 0.3.2 (#157) @gsarti
- Fix conflicting generation args (#155) @gsarti
- Fix issues with pytorch 1.13 on MacOs (#151) @lsickert
📝 Documentation
👥 List of contributors
v0.3.1: First public release
What's Changed
Minor bug fixes of v0.3.0, with more extensive documentation for base classes
Full Changelog: v0.3.0...v0.3.1
v0.3.0
What's Changed
- Fixes to v0.2 and CLI by @gsarti in #134
- New CLI and Dataset attribute command by @gsarti in #135
- Attribute batching and generalized step scores by @gsarti in #136
- Custom attribution targets by @gsarti in #138
- Added `exec_time` to output info dict by @gsarti in #143
- Python 3.11 CI support by @gsarti in #146
- Decoder-only Attribution Models Support by @gsarti in #144
Full Changelog: v0.2.0...v0.3.0
v0.2.0
What's Changed
- Quality-of-life improvements for usage by @gsarti in #115
- [Bugfix] Fix GPU compatibility in AttributionModel by @gsarti in #118
- Add target-side feature attribution by @gsarti in #119
- Optional output probabilities when performing attribution by @gsarti in #120
- Consistency improvements and tests by @gsarti in #121
- Fix target attribution normalization, optional attribute EOS by @gsarti in #124
- Attribute eos default false, fix referenceless case by @gsarti in #125
- Conversion to nn.Module & Softmax added in forward by @gsarti in #126
- Added new `FeatureAttributionOutput` class by @gsarti in #129
- `Attribution` classes refactoring, `Aggregator` for postponed score aggregation by @gsarti in #130
- Add `SubwordAggregator` and `PairAggregator` by @gsarti in #131
- Improved documentation by @gsarti in #132
Full Changelog: v0.1.0...v0.2.0
v0.1.0
Initial pre-release of the inseq package
Full Changelog: https://github.com/inseq-team/inseq/commits/v0.1.0