Releases: kaiko-ai/eva
0.1.0
What's Changed
- Bump micro version by @ioangatop in #486
- update docs config links to point to v0.0.2 by @roman807 in #487
- Update paper citation by @ioangatop in #489
- Update the vision dataset return types to `tv_tensors` by @ioangatop in #478
- Refactor embeddings writer by @ioangatop in #461
- fixed phikon configs by @roman807 in #493
- Refactor core embeddings based datasets by @ioangatop in #495
- Add doc tests and minor fixes by @ioangatop in #492
- support setting download as env-variable by @roman807 in #514
- Rename `DOWNLOAD` env var to `DOWNLOAD_DATA` by @ioangatop in #534
- Allow head model as `dict` and parse it as object in `configure_model` by @ioangatop in #541
- Add support for WSI-level classification by @roman807 in #542
- Support semantic segmentation downstream evaluation tasks by @ioangatop in #517
- make sure multiwsidatasets are validated (updated) by @roman807 in #545
- Add `MoNuSac` segmentation dataset by @ioangatop in #549
- Add `CoNSeP` dataset with patching by @nkaenzig in #547
- Fix normalisation in `ResizeAndClamp` transform by @ioangatop in #555
- Update dependency jsonargparse to v4.30.0 by @renovate in #540
- Update dependency jsonargparse to v4.31.0 by @renovate in #560
- Update actions/checkout digest to 692973e by @renovate in #536
- Add support for offline total segmentator training by @ioangatop in #504
- Change `overwrite` default value to True in `EmbeddingWriter` by @nkaenzig in #558
- Allow local checkpoint to be loaded in `TimmEncoder` by @ioangatop in #566
- Update main README by @ioangatop in #563
- Add `BCSS` dataset by @nkaenzig in #559
- Update segmentation metrics by @ioangatop in #571
- Add support for `phikon` as encoder for segmentation tasks by @ioangatop in #570
- Add `ignore_index` functionality in segmentation metrics by @ioangatop in #574
- Add `LiTS` radiology CT segmentation dataset and task by @ioangatop in #567
- Fix LiTS dataset validation by @ioangatop in #577
- Fix the embeddings saving dataset by @ioangatop in #579
- Remove `cpu` accelerator from `LiTS` config by @ioangatop in #581
- Add `Dice` segmentation loss by @ioangatop in #587
- Fix Dice loss function for `ignore_index` argument (#589) by @ioangatop in #590
- Fix `LiTS` transforms by @ioangatop in #594
- update leaderboard and documentation with WSI segmentation datasets by @roman807 in #596
- Use `core.models.wrappers` API to load encoder models by @nkaenzig in #598
- Add model registry for accessing backbones by name by @nkaenzig in #591
- Update .lock file to solve `exceptiongroup` dependency issue in CI by @nkaenzig in #605
- Add support for running `timm` models with default configs by @nkaenzig in #607
- Add `H-optimus-0` model to registry by @nkaenzig in #599
- Add `Prov-GigaPath` model to registry by @nkaenzig in #600
- Add `hibou` models to registry by @nkaenzig in #609
- Copy metadata entries to CPU in embeddings writer by @nkaenzig in #611
- Allow to call `timm` models with `timm/model_name` on `BackboneModelRegistry` by @ioangatop in #613
- Make dataset root fully configurable through env variable by @nkaenzig in #617
- Bugfix for monusac worker issue by @nkaenzig in #619
- Set multiprocessing start method to `spawn` for `EmbeddingsDataset` by @nkaenzig in #621
- Update segmentation visualisation default arguments to show one column per group by @ioangatop in #623
- Add `WandbLogger` by @nkaenzig in #628
- Set available auto download to `False` for `camelyon16` by @nkaenzig in #626
- Bump major version `1.0.0` by @ioangatop in #615
- Corrected number of classes in `CoNSeP` configs by @nkaenzig in #632
- Refactoring of forward method in model wrapper classes by @nkaenzig in #633
- Remove `dynamic_img_size` from `timm/{timm_model}` registry function by @nkaenzig in #638
- Update leaderboard style to heatmap by @roman807 in #631
- Update `timm` & `torchmetrics` dependency by @ioangatop in #640
- bump to 0.1.0 by @ioangatop in #643
Full Changelog: 0.0.2...0.1.0
Several improvements and fixes
What's Changed
- Remove CI badge by @roman807 in #345
- Minor fix on downloading configs instructions by @ioangatop in #344
- Update email in CoC by @ioangatop in #347
- Print results table at the end of an evaluation session by @nkaenzig in #337
- update installation instructions by @roman807 in #349
- Add `MultiEmbeddingsClassificationDataset` by @nkaenzig in #351
- Disable Loggers that don't support saving to remote storage at runtime by @nkaenzig in #352
- Fix typos and inconsistent formatting by @roman807 in #353
- Add gitleaks to CI workflow by @a-thiery in #357
- add dinov2 to documentation by @roman807 in #355
- update onnx to version 1.16.0 by @roman807 in #359
- Add UNI results & replication instructions by @roman807 in #362
- #366 additional Renovate settings by @b-abderrahmane in #367
- Update `lightning` to version `2.2.2` by @roman807 in #371
- Reformat image segmentation datasets by @ioangatop in #380
- Pin dependencies by @renovate in #369
- Update pdm-project/setup-pdm digest to 568ddd6 by @renovate in #400
- Update wntrblm/nox action to v2024.04.15 by @renovate in #372
- Update actions/checkout digest to 0ad4b8f by @renovate in #399
- Update `TotalSegmentator` output binary masks to a semantic mask by @ioangatop in #390
- Add support for 8bit images in TotalSegmentator by @ioangatop in #398
- Add `read_nifti_slice` to optionally cast image to stored input type by @ioangatop in #392
- Update CI triggering events by @ioangatop in #406
- Move model freezing functionality to `configure_model` by @ioangatop in #409
- Update `DEVELOPER_GUIDE.md` by @nkaenzig in #418
- Update `TotalSegmentator2D` dataset to fetch all the slices by @ioangatop in #416
- Move metrics to CPU when using single device by @ioangatop in #446
- Remove total segmentator classification dataset by @ioangatop in #450
- updated eva logo by @roman807 in #454
- Update actions/checkout digest to a5ac7e5 by @renovate in #458
- Add configuration logger by @ioangatop in #466
- Update `README` with paper citation by @ioangatop in #474
- fix config link in docs by @roman807 in #482
- Update img shields of README by @ioangatop in #480
- Fix `torch` and `jsonargparse` versions by @ioangatop in #483
New Contributors
- @a-thiery made their first contribution in #357
- @b-abderrahmane made their first contribution in #367
Full Changelog: 0.0.1...0.0.2
v0.0.1 - First eva release
Oncology FM Evaluation Framework by kaiko.ai
Installation • How To Use • Documentation • Datasets • Benchmarks
Contribute • Acknowledgements
`eva` is an evaluation framework for oncology foundation models (FMs) by kaiko.ai.
Check out the documentation for more information.
Highlights:
- Easy and reliable benchmarking of oncology FMs
- Automatic embedding inference and evaluation on downstream tasks
- Native support for popular medical datasets and models
- Statistics produced over multiple evaluation fits and multiple metrics
Installation
Simple installation from PyPI:
```sh
# to install the core version only
pip install kaiko-eva

# to install the expanded `vision` version
pip install 'kaiko-eva[vision]'

# to install everything
pip install 'kaiko-eva[all]'
```
To install the latest version of the `main` branch:

```sh
pip install "kaiko-eva[all] @ git+https://github.com/kaiko-ai/eva.git"
```
You can verify that the installation was successful by executing:
```sh
eva --version
```
How To Use
`eva` can be used directly from the terminal as a CLI tool as follows:

```sh
eva {fit,predict,predict_fit} --config url/or/path/to/the/config.yaml
```

When used as a CLI tool, `eva` accepts configuration files (`.yaml`) as an argument to define its functionality.
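As an illustration, the sketch below shows the rough shape of such a config file. It is a hypothetical fragment (the section and key names are assumptions, not copied from a shipped config); the files under the repo's configs directory are the authoritative reference:

```yaml
# Hypothetical sketch of an eva .yaml config; consult the shipped
# configs for the exact sections, keys and class paths.
trainer:
  max_epochs: 100        # training-loop settings
model:
  # backbone, head and loss definitions go here
data:
  # dataset class path and its init arguments go here
```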
Natively supported configs can be found in the configs directory
of the repo. Apart from cloning the repo, you can download the latest config folder as a `.zip`
from your browser from
here. Alternatively,
the configs of a specific release can be downloaded from the terminal as follows:

```sh
curl -LO https://github.com/kaiko-ai/eva/releases/download/0.0.1/configs.zip && unzip configs.zip
```
For example, to perform a downstream evaluation of DINO ViT-S/16 on the BACH dataset with
linear probing, by first inferring the embeddings and then performing 5 sequential fits, execute:

```sh
# from a locally stored config file
eva predict_fit --config ./configs/vision/dino_vit/offline/bach.yaml

# from a remotely stored config file
eva predict_fit --config https://raw.githubusercontent.com/kaiko-ai/eva/main/configs/vision/dino_vit/offline/bach.yaml
```
Note
All the datasets in the repo that support automatic download have this option set to `false` by default.
For automatic download you have to manually set `download=true`.
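In practice, enabling automatic download usually amounts to flipping the dataset's `download` init argument in the config file. The fragment below is illustrative only (the exact nesting varies between configs):

```yaml
# Illustrative fragment: enable automatic download for a dataset by
# setting its `download` init argument in the config file.
data:
  init_args:
    download: true
```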
To view all the possible options, execute:

```sh
eva --help
```
For more information, please refer to the documentation
and tutorials.
Benchmarks
In this section you will find model benchmarks which were generated with `eva`.
Table I: WSI patch-level benchmark
| Model | BACH | CRC | MHIST | PCam/val | PCam/test |
|---|---|---|---|---|---|
| ViT-S/16 (random) [1] | 0.410 | 0.617 | 0.501 | 0.753 | 0.728 |
| ViT-S/16 (ImageNet) [1] | 0.695 | 0.935 | 0.831 | 0.864 | 0.849 |
| ViT-B/8 (ImageNet) [1] | 0.710 | 0.939 | 0.814 | 0.870 | 0.856 |
| DINO (p=16) [2] | 0.801 | 0.934 | 0.768 | 0.889 | 0.895 |
| Phikon [3] | 0.725 | 0.935 | 0.777 | 0.912 | 0.915 |
| ViT-S/16 (kaiko.ai) [4] | 0.797 | 0.943 | 0.828 | 0.903 | 0.893 |
| ViT-S/8 (kaiko.ai) [4] | 0.834 | 0.946 | 0.832 | 0.897 | 0.887 |
| ViT-B/16 (kaiko.ai) [4] | 0.810 | 0.960 | 0.826 | 0.900 | 0.898 |
| ViT-B/8 (kaiko.ai) [4] | 0.865 | 0.956 | 0.809 | 0.913 | 0.921 |
| ViT-L/14 (kaiko.ai) [4] | 0.870 | 0.930 | 0.809 | 0.908 | 0.898 |
Table I: Linear probing evaluation of FMs on patch-level downstream datasets.
We report averaged balanced accuracy
over 5 runs, with an average standard deviation of ±0.003.
References:
1. "Emerging properties in self-supervised vision transformers"
2. "Benchmarking self-supervised learning on diverse pathology datasets"
3. "Scaling self-supervised learning for histopathology with masked image modeling"
4. "Towards Training Large-Scale Pathology Foundation Models: from TCGA to Hospital Scale"
Contributing
`eva` is an open-source project and welcomes contributions of all kinds. Please check out the developer
and contributing guides for help on how to do so.
All contributors must follow the code of conduct.
Acknowledgements
Our codebase is built using multiple open-source contributions