
Faster cleanup of sharded datasets #367

Merged

Conversation

@tom-pollak (Contributor) commented Nov 10, 2024

Description

Builds upon #321. Previously I used dataset.save_to_disk to write the final dataset, but that rewrites the entire dataset to disk, which is very slow. Instead, I manually move the shards into the standard Hugging Face on-disk format, which lets us skip resaving the entire dataset.
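For illustration, here is a minimal sketch of the shard-consolidation idea (hypothetical, not necessarily the PR's actual code). It assumes each shard was written with Dataset.save_to_disk into its own directory containing a single data-00000-of-00001.arrow file plus dataset_info.json and state.json; the arrow files are moved and renamed into one output directory, and the first shard's metadata is reused with an updated data-file list:

import json
import shutil
from pathlib import Path

def consolidate_shards(shard_dirs: list[Path], output_dir: Path) -> None:
    """Stitch per-shard save_to_disk outputs into one dataset directory
    without rewriting the underlying arrow data. (Sketch only.)"""
    output_dir.mkdir(parents=True, exist_ok=True)
    total = len(shard_dirs)
    data_files = []
    for i, shard_dir in enumerate(shard_dirs):
        name = f"data-{i:05d}-of-{total:05d}.arrow"
        # Assumes each shard directory holds exactly one arrow file.
        shutil.move(str(shard_dir / "data-00000-of-00001.arrow"), str(output_dir / name))
        data_files.append({"filename": name})
    # Reuse metadata from the first shard; only the data-file list changes.
    shutil.copy(str(shard_dirs[0] / "dataset_info.json"), str(output_dir / "dataset_info.json"))
    state = json.loads((shard_dirs[0] / "state.json").read_text())
    state["_data_files"] = data_files
    (output_dir / "state.json").write_text(json.dumps(state, indent=2))

The consolidated directory can then be opened with datasets.load_from_disk, so the final dataset is produced by cheap file moves rather than a full rewrite of the data.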

Type of change

  • Performance improvement

This should be externally identical, just faster at saving the dataset. My existing cached activation runner tests validate the change well.

Benchmarking the cached activations runner on my Mac:

poetry run py.test tests/benchmark/test_cache_activations_runner.py::test_cache_activations_runner -s

Old: Caching activations took: 81.5707

New: Caching activations took: 68.0761

I expect the improvement to be even more pronounced for larger runs, where activations are computed faster than they can be written to disk, so disk I/O becomes the bottleneck.

Checklist:

  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have not rewritten tests relating to key interfaces which would affect backward compatibility

You have tested formatting, typing and unit tests (acceptance tests are not currently in use):

  • I have run make check-ci to check format and linting. (You can run make format to format code if needed.)

Previously I used dataset.save_to_disk to write the final dataset, but this can be slow. Instead, I manually move the shards into the standard hf format, which avoids resaving the entire dataset.
@tom-pollak marked this pull request as ready for review on November 10, 2024 at 12:36
@tom-pollak (Contributor, Author) commented:
cc @chanind

@chanind (Collaborator) commented Nov 10, 2024

Nice, so this uses less disk space as well? Looks good!

@chanind merged commit a3663b7 into jbloomAus:main on Nov 10, 2024
5 checks passed
@tom-pollak deleted the dev/faster-sharded-dataset-cleanup branch on November 10, 2024 at 19:13