All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog and this project adheres to Semantic Versioning.
- General `CorpusQueryCollator` for BEIR-style dataset training or hard-negative training. This deprecates `HardNegCollator`, but all changes to the training loop were made for a seamless update.
- Updated the BiPali config files
- Removed query augmentation tokens from `BiQwen2Processor`
- Modified `XQwen2Processor` to place the `<|endoftext|>` token at the end of the document prompt (non-breaking for ColQwen but helps BiQwen)
- Removed `add_suffix` in the `VisualRetrieverCollator` and let the `suffix` be added in the individual processors
- Changed the incorrect `<pad>` token to `<|endoftext|>` for query augmentation in `ColQwen2Processor`. Note that previous models were trained with `<|endoftext|>`, so this is simply a non-breaking inference upgrade patch.
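The document-prompt change above can be sketched roughly as follows (`build_doc_prompt` and the constant are hypothetical names for illustration; the real logic lives inside the processor classes in `colpali_engine`):

```python
# Rough sketch of the document-prompt change described above; the actual
# implementation lives in the colpali-engine processors and differs in detail.
ENDOFTEXT = "<|endoftext|>"  # Qwen2's end-of-text special token

def build_doc_prompt(base_prompt: str) -> str:
    """Place the <|endoftext|> token at the very end of the document prompt."""
    return base_prompt + ENDOFTEXT
```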
- Add BiQwen2 model
- Modified ColQwen and BiQwen to avoid the unnecessary forward pass through the last layer of the original model (the classification head)
- Bumped "breaking" dependencies on MTEB and Transformers version and made the corresponding changes in the code
- Cast the image dtype in ColPali to handle the breaking transformers 4.46 update
- Added a `num_image_tokens` kwarg to `ColQwen2Processor` to allow for different image resolutions
- Fix wrong variable name for `ColPaliProcessor`'s prefixes
- Restore, refactor, and improve the `interpretability` module for generating similarity maps
- Remove dummy image from `ColPaliProcessor.process_queries`
- Fix the `compute_hardnegs.py` script
- Add missing `model.eval()` in tests
- Add tests for ColQwen2
- Add module-level imports for collators
- Add sanity check in the run inference example script
- Add E2E test for ColPali
- Add Qwen2-VL support
- Improve code clarity in the run inference example script
- Subset the example dataset in the run inference example script
- Rename scorer test to `test_processing_utils`
- Greatly simplify routing logic in Trainer selection and when feeding arguments to the model forward pass (refactor)
- Removed the `ContrastiveNegativeTrainer` class, which is now integrated into `ContrastiveTrainer`. This should not affect the user-facing API.
- Bumped transformers version to 4.45.0 to get Qwen2-VL support
- Import `HardNegCollator` at module level if and only if `datasets` is available
- Remove the need for `typer` in the run inference example script
- Fix edge case when an empty suffix `""` is given to the processor
- Fix bug in `HardNegCollator` present since 0.3.0
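The empty-suffix edge case can be illustrated with a tiny guard (a hypothetical helper for illustration, not the library's actual code):

```python
def apply_suffix(text: str, suffix: str = "") -> str:
    # An empty suffix "" should be a no-op rather than an error
    # (the edge case fixed above).
    if not suffix:
        return text
    return text + suffix
```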
✨ This release is an exhaustive package refacto, making ColPali more modular and easier to use.
🚨 It is NOT backward-compatible with previous versions.
- Restructure the `utils` module
- Restructure the model training code
- Add custom `Processor` classes to easily process images and/or queries
- Enable module-level imports
- Add scoring to processor
- Add `CustomRetrievalEvaluator`
- Add missing typing
- Add tests for model, processor, scorer, and collator
- Lint the changelog
- Add missing docstrings
- Add "Ruff" and "Test" CI pipelines
- Restructure all modules to closely follow the `transformers` architecture
- Hugely simplify the collator implementation to make it model-agnostic
- `ColPaliProcessor`'s `process_queries` doesn't need a mock image input anymore
- Clean `pyproject.toml`
- Loosen the required dependencies
- Replace `black` with the `ruff` linter
- Remove the `interpretability` and `eval_manager` modules
- Remove unused utils
- Remove `TextRetrieverCollator`
- Remove `HardNegDocmatixCollator`
- Fix wrong PIL import
- Fix dependency issues
- Remove forced "cuda" usage in Retrieval Evaluator
Patch a misalignment between the query preprocessing helper function and the training scheme.
- Add 10 extra pad tokens by default to the query to act as reasoning buffers. This was added in the collator but not in the external helper function used for inference.
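A minimal sketch of the alignment fix, assuming the buffer token is the tokenizer's pad token (the helper name and token string are illustrative, not the package's API):

```python
N_BUFFER = 10
PAD_TOKEN = "<pad>"  # assumed buffer token; the real token depends on the model

def preprocess_query(query: str, n_buffer: int = N_BUFFER) -> str:
    """Append buffer tokens so inference-time queries match the training collator."""
    return query + PAD_TOKEN * n_buffer
```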
Large refactoring to address several issues and add features. This release is not backward-compatible with previous versions. Models trained under this version will exhibit degraded performance if used with previous versions of the code, and vice versa.
- Added multiple training options for training with hard negatives. This leads to better model performance!
- Added options for restarting training from a checkpoint.
- Optionally load ColPali models from pre-initialized backbones of the same shape to remove any stochastic initialization when loading adapters. This fixes #11 and #17.
- Set the padding side to right in the tokenizer to fix a misalignment issue between queries of different lengths in the same batch. Fixes #12.
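A minimal sketch (a hypothetical helper, not the actual tokenizer code) of why right padding fixes the alignment: with right padding, every query's real tokens start at position 0 regardless of the other lengths in the batch:

```python
PAD_ID = 0  # illustrative pad token id

def pad_batch(seqs, padding_side="right"):
    """Pad variable-length token-id lists in a batch to a common length."""
    max_len = max(len(s) for s in seqs)
    padded = []
    for s in seqs:
        pad = [PAD_ID] * (max_len - len(s))
        # Right padding appends pads; left padding shifts the real tokens.
        padded.append(s + pad if padding_side == "right" else pad + s)
    return padded
```

With `padding_side="left"`, shorter queries get shifted right by a batch-dependent offset, which is the misalignment described above.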
- Add 10 extra pad tokens by default to the query to act as reasoning buffers. This enables the above fix without degrading performance and cleans up the old technique of using `<unused>` tokens.
Minor patch release to fix packaging issues.
- Fix `.gitignore` to include all necessary files in the package.
Initial code release corresponding to the paper.