Track contamination in new leaderboard #1556

Open · 1 of 4 tasks
Muennighoff opened this issue Dec 5, 2024 · 7 comments
Labels: leaderboard (issues related to the leaderboard)

Comments

@Muennighoff (Contributor) commented Dec 5, 2024

Just discussed this with @sivareddyg and others: I think once we have this, many people would be willing to help us do the annotations. Ideally, adding a contamination annotation is as easy as opening a simple PR.

  • Add an easy way to add training data metadata to models
  • Modify the UI to let users choose whether contaminated models are shown
  • Modify the UI to (maybe always) display a small symbol next to scores flagged as contaminated based on the metadata
  • Maybe add a small section here on how to add contamination annotations (a sketch of what such an annotation could look like follows below)
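To make "super easy" concrete, here is a minimal sketch of what such an annotation could look like, assuming the training_datasets field proposed further down in this thread; the class is trimmed, and the model and dataset names are made up for illustration:

from pydantic import BaseModel

class ModelMeta(BaseModel):
    # Trimmed to the fields relevant here; the real class has many more.
    name: str | None = None
    revision: str | None = None
    training_datasets: list[str] | None = None  # None = training data unknown

# Hypothetical annotation PR: one small diff declaring what the model saw.
meta = ModelMeta(
    name="my-org/my-embedding-model",
    revision="abc123",
    training_datasets=["MSMARCO", "NQ"],
)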
@Muennighoff added the leaderboard label Dec 5, 2024
@KennethEnevoldsen (Contributor)

So you are suggesting that we replace:

from pydantic import BaseModel

class ModelMeta(BaseModel):
    name: str | None
    revision: str | None
    ...
    zero_shot_benchmarks: list[str] | None = None
    citation: str | None = None

with

class ModelMeta(BaseModel):
    name: str | None
    revision: str | None
    ...
    training_datasets: list[str] | None = None
    citation: str | None = None

And rely on users to annotate it? I would be up for that (@x-tabdeveloping, this is also more in line with your idea).

We can extrapolate zero_shot_benchmarks from training datasets. What do we do for models where the training data is not known?
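A minimal sketch of that derivation, treating unknown training data as a third state (the helper name and the assumption that benchmark and training dataset names share a namespace are mine, not existing mteb API):

def is_zero_shot(training_datasets: list[str] | None, benchmark_datasets: set[str]) -> bool | None:
    # None = training data not annotated/public, so zero-shot status is unknown.
    if training_datasets is None:
        return None
    # Zero-shot iff none of the benchmark's datasets were trained on.
    return not benchmark_datasets.intersection(training_datasets)

The None case is exactly the open question above: the UI then has to decide whether to show, hide, or merely flag such models.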

The UI is pretty simple once we have it.

@x-tabdeveloping (Collaborator)

Yeah, once we've implemented the logic I can add it to the UI

@isaac-chung (Collaborator)

Does that mean that #1372 is obsolete? If so, we can close it.

@KennethEnevoldsen (Contributor)

Added a PR

KennethEnevoldsen added a commit that referenced this issue Dec 8, 2024
* fix: Add training dataset to model meta

Addresses #1556

* Added docs

* format
isaac-chung pushed a commit that referenced this issue Dec 9, 2024
* fix: Count unique texts, data leaks in calculate metrics (#1438)
* add more stat
* add more stat
* update statistics
* fix: update task metadata to allow for null (#1448)
* Update tasks table
* 1.19.5
Automatically generated by python-semantic-release
* Fix: Made data parsing in the leaderboard figure more robust (#1450)
Bugfixes with data parsing in main figure
* Fixed task loading (#1451)
* Fixed task result loading from disk
* Fixed task result loading from disk
* fix: publish (#1452)
* 1.19.6
Automatically generated by python-semantic-release
* fix: Fix load external results with `None` mteb_version (#1453)
* fix
* lint
* 1.19.7
Automatically generated by python-semantic-release
* WIP: Polishing up leaderboard UI (#1461)
* fix: Removed column wrapping on the table, so that it remains readable
* Added disclaimer to figure
* fix: Added links to task info table, switched out license with metric
* fix: loading pre 1.11.0 (#1460)
* small fix
* fix: fix
* 1.19.8
Automatically generated by python-semantic-release
* fix: swap touche2020 to maintain compatibility (#1469)
swap touche2020 for parity
* 1.19.9
Automatically generated by python-semantic-release
* docs: Add sum per language for task counts (#1468)
* add sum per lang
* add sort by sum option
* make lint
* fix: pinned datasets to <3.0.0 (#1470)
* 1.19.10
Automatically generated by python-semantic-release
* feat: add CUREv1 retrieval dataset (#1459)
* feat: add CUREv1 dataset
---------
Co-authored-by: nadshe <[email protected]>
Co-authored-by: olivierr42 <[email protected]>
Co-authored-by: Daniel Buades Marcos <[email protected]>
* feat: add missing domains to medical tasks
* feat: modify benchmark tasks
* chore: benchmark naming
---------
Co-authored-by: nadshe <[email protected]>
Co-authored-by: olivierr42 <[email protected]>
* Update tasks table
* 1.20.0
Automatically generated by python-semantic-release
* fix: check if `model` attr of model exists (#1499)
* check if model attr of model exists
* lint
* Fix retrieval evaluator
* 1.20.1
Automatically generated by python-semantic-release
* fix: Leaderboard demo data loading (#1507)
* Made get_scores error tolerant
* Added join_revisions, made get_scores failsafe
* Fetching metadata fixed for HF models
* Added failsafe metadata fetching to leaderboard code
* Added revision joining to leaderboard app
* fix
* Only show models that have metadata, when filter_models is called
* Ran linting
* 1.20.2
Automatically generated by python-semantic-release
* fix: leaderboard only shows models that have ModelMeta (#1508)
Filtering for models that have metadata
* 1.20.3
Automatically generated by python-semantic-release
* fix: align readme with current mteb (#1493)
* align readme with current mteb
* align with mieb branch
* fix test
* 1.20.4
Automatically generated by python-semantic-release
* docs: Add lang family mapping and map to task table (#1486)
* add lang family mapping and map to task table
* make lint
* add back some unclassified lang codes
* Update tasks table
* fix: Ensure that models match the names on embedding-benchmarks/results (#1519)
* 1.20.5
Automatically generated by python-semantic-release
* fix: Adding missing metadata on models and matching names up with the results repo (#1528)
* Added Voyage 3 models
* Added correct metadata to Cohere models and matched names with the results repo
* 1.20.6
Automatically generated by python-semantic-release
* feat: Evaluate missing splits (#1525)
* fix: evaluate missing splits (#1268)
* implement partial evaluation for missing splits
* lint
* requested changes done from scratch
* test for missing split evaluation added
* uncomment test
* lint
* avoid circular import
* use TaskResult
* skip tests for now
---------
Co-authored-by: Isaac Chung <[email protected]>
* got test_all_splits_evaluated passing
* tests passing
* address review comments
* make lint
* handle None cases for kg_co2_emissions
* use new results info
---------
Co-authored-by: Thivyanth <[email protected]>
* 1.21.0
Automatically generated by python-semantic-release
* fix: Correct typos superseeded -> superseded (#1532)
fix typo -> superseded
* 1.21.1
Automatically generated by python-semantic-release
* fix: Task load data error for SICK-BR-STS and XStance (#1534)
* fix task load data for two tasks
* correct dataset keys
* 1.21.2
Automatically generated by python-semantic-release
* fix: Proprietary models now get correctly shown in leaderboard (#1530)
* Fixed showing proprietary models in leaderboard
* Added links to all OpenAI models
* Fixed table formatting issues
* Bumped Gradio version
* 1.21.3
Automatically generated by python-semantic-release
* docs: Add Model Meta parameters and metadata (#1536)
* add multi_qa_MiniLM_L6_cos_v1 model meta
* add all_mpnet_base_v2
* add parameters to model meta
* make lint
* add extra params to meta
* fix: add more model meta (jina, e5) (#1537)
* add e5 model meta
* address review comments
* 1.21.4
Automatically generated by python-semantic-release
* Add cohere models (#1538)
* fix: bug cohere names
* format
* fix: add nomic models (#1543)
#1515
* fix: Added all-minilm-l12-v2 (#1542)
#1515
* fix: Added arctic models (#1541)
#1515
* fix: add sentence trimming to OpenAIWrapper (#1526)
* fix: add sentence trimming to OpenAIWrapper
* fix: import tiktoken library inside encode function
* fix: check tokenizer library installed and update ModelMeta to pass tokenizer_name
* fix: pass tokenizer_name, max_tokens to loader
* fix: make tokenizer_name None for default
* fix: delete changes for ModelMeta
* fix: fix revision to 2 for OpenAI models
* fix: add docstring for OpenAIWrapper
* fix: lint
* feat: add openai optional dependency set
* fix: add sleep for too many requests
* fix: add lint
* fix: delete evaluate file
* 1.21.5
Automatically generated by python-semantic-release
* fix: Fixed metadata errors (#1547)
* 1.21.6
Automatically generated by python-semantic-release
* fix: remove curev1 from multilingual (#1552)
Seems like it was added here:
1cc6c9e
* 1.21.7
Automatically generated by python-semantic-release
* fix: Add Model2vec (#1546)
* Added Model2Vec wrapper
* Added Model2vec models
* Added model2vec models to registry
* Added model2vec as a dependency
* Ran linting
* Update mteb/models/model2vec_models.py
Co-authored-by: Kenneth Enevoldsen <[email protected]>
* Update mteb/models/model2vec_models.py
Co-authored-by: Kenneth Enevoldsen <[email protected]>
* Added adapted_from and superseeded_by to model2vec models.
* Added missing import
* Moved pyproject.toml to optional dependencies
* Fixed typos
* Added import error and changed model to model_name
* Added Numpy to frameworks
* Added Numpy to frameworks
* Corrected false info on model2vec models
* Replaced np.inf with maxint
* Update mteb/models/model2vec_models.py
Co-authored-by: Isaac Chung <[email protected]>
* Added option to have infinite max tokens, added it to Model2vec
---------
Co-authored-by: Kenneth Enevoldsen <[email protected]>
Co-authored-by: Isaac Chung <[email protected]>
* Made result loading more permissive, changed eval splits for HotPotQA and DBPedia (#1554)
* Removed train and dev from eval splits on HotpotQA
* Removed dev from eval splits on DBPedia
* Made task_results validation more permissive
* Readded exception in get_score
* Ran linting
* 1.21.8
Automatically generated by python-semantic-release
* docs: Correction of SICK-R metadata (#1558)
* Correction of SICK-R metadata
* Correction of SICK-R metadata
---------
Co-authored-by: rposwiata <[email protected]>
* feat(google_models): fix issues and add support for `text-embedding-005` and `text-multilingual-embedding-002` (#1562)
* fix: google_models batching and prompt
* feat: add text-embedding-005 and text-multilingual-embedding-002
* chore: `make lint` errors
* fix: address PR comments
* 1.22.0
Automatically generated by python-semantic-release
* fix(bm25s): search implementation (#1566)
fix: bm25s implementation
* 1.22.1
Automatically generated by python-semantic-release
* docs: Fix dependency library name for bm25s (#1568)
* fix: bm25s implementation
* correct library name
---------
Co-authored-by: Daniel Buades Marcos <[email protected]>
* fix: Add training dataset to model meta (#1561)
* fix: Add training dataset to model meta
Addresses #1556
* Added docs
* format
* feat: (cohere_models) cohere_task_type issue, batch requests and tqdm for visualization (#1564)
* feat: batch requests to cohere models
* fix: use correct task_type
* feat: use tqdm with openai
* fix: explicitly set `show_progress_bar` to False
* fix(publichealth-qa):  ignore rows with `None` values in `question` or `answer` (#1565)
* 1.23.0
Automatically generated by python-semantic-release
* fix wongnai
* update inits
* fix tests
* lint
* update imports
* fix tests
* lint
---------
Co-authored-by: Kenneth Enevoldsen <[email protected]>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions <[email protected]>
Co-authored-by: Márton Kardos <[email protected]>
Co-authored-by: Isaac Chung <[email protected]>
Co-authored-by: Napuh <[email protected]>
Co-authored-by: Daniel Buades Marcos <[email protected]>
Co-authored-by: nadshe <[email protected]>
Co-authored-by: olivierr42 <[email protected]>
Co-authored-by: Thivyanth <[email protected]>
Co-authored-by: Youngjoon Jang <[email protected]>
Co-authored-by: Rafał Poświata <[email protected]>
@Muennighoff (Contributor, Author)

Here's an example of displaying contamination: https://huggingface.co/spaces/allenai/reward-bench, though we probably want to be more fine-grained.

@x-tabdeveloping (Collaborator)

Thanks for implementing the metadata attribute for this.
I have a number of questions and concerns about it.

1. We have limited information

As it currently stands, only 18 of the 187 models with associated metadata in MTEB have any annotation of what they were trained on, and I suspect that some of these are quite incomplete.

I personally don't think it makes a lot of sense to implement a filter in the leaderboard until we add this data on more models.

I'm also wondering who should be responsible for doing this. What happens with models for which this information is not published? Can we push people to give us information about their training data?

2. Indirect contamination

It might be the case that certain datasets used for training a model contain examples from another dataset's test or dev split, or contain the other dataset altogether. What are we planning to do in such cases? Do we ignore it?

Some training datasets might be generated with LLMs that have seen the original source. Do we care about this?

3. Pretraining datasets

Models like e5-mistral are based on a model that was pretrained on vast amounts of data and is likely to have seen some of the test data from certain datasets in raw form.
Is this something we consider problematic, and if so, how are we planning to account for it?

@KennethEnevoldsen (Contributor) commented Dec 16, 2024

> I personally don't think it makes a lot of sense to implement a filter in the leaderboard until we add this data on more models.

Let us add a filter that removes models we know are not zero-shot (then people will add the annotations as we go).
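A sketch of that filter, reusing the three-state reading from above (function and attribute names are illustrative, not the actual leaderboard code):

def filter_known_contaminated(models, benchmark_datasets: set[str]):
    # Drop only models we *know* trained on benchmark data; models with
    # unknown (None) training data stay in until someone annotates them.
    return [
        m for m in models
        if m.training_datasets is None
        or not benchmark_datasets.intersection(m.training_datasets)
    ]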

> Do we care about this?

Not now

> Is this something we consider problematic, and if so, how are we planning to account for it?

It would also be leakage if, e.g., the test data is included.

(there could be a paper on determining leakage)
