No EOD Tokens in EleutherAI/pile-deduped-pythia-preshuffled #175

Open
markschoene opened this issue Oct 4, 2024 · 1 comment

@markschoene

According to `20B_tokenizer.json`, the end-of-document (EOD) token has ID 0 and is denoted `<|endoftext|>`. Earlier issues have raised that there are no EOD tokens in EleutherAI/pile-deduped-pythia-preshuffled, and this appears to have been outstanding since January 2024.

I processed the Pile as instructed in the README.md, based on EleutherAI/pile-deduped-pythia-preshuffled, and I can confirm that there appear to be no EOD tokens in the dataset. Beyond using `batch_viewer.py`, I also started a training loop and recorded `x.min()` at the beginning of my `forward(x)` function. Both methods show that the smallest token ID is 2, from which I conclude that this version of the dataset contains no EOD tokens.

This causes serious problems both for training and for evaluating on other datasets that use the EOD token, since a model trained on EleutherAI/pile-deduped-pythia-preshuffled never receives gradient updates for that token. Would it be possible to provide a tokenized Pile with 2049 tokens per sequence that does separate documents with EOD tokens?
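
For reference, this is roughly the check I ran on the memmapped data (a minimal sketch, not the exact `batch_viewer.py` code; the file name below is a placeholder, and I'm assuming the unsharded data is a flat binary of uint16 token IDs as described in the README):

```python
import numpy as np

# Placeholder path: one unsharded .bin file from
# EleutherAI/pile-deduped-pythia-preshuffled; adjust to your local copy.
DATA_PATH = "pile-deduped/document.bin"

# Memory-map the flat array of token IDs so the file is never fully loaded into RAM.
tokens = np.memmap(DATA_PATH, dtype=np.uint16, mode="r")

# Scan in chunks, tracking the smallest token ID and the number of
# <|endoftext|> (ID 0) occurrences. If EOD tokens were present, the min would be 0.
CHUNK = 10_000_000
smallest = np.iinfo(np.uint16).max
eod_count = 0
for start in range(0, len(tokens), CHUNK):
    chunk = tokens[start:start + CHUNK]
    smallest = min(smallest, int(chunk.min()))
    eod_count += int((chunk == 0).sum())

print(f"smallest token ID: {smallest}")            # I observe 2, never 0
print(f"count of EOD (ID 0) tokens: {eod_count}")  # I observe 0
```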

@markschoene
Author

This issue was previously mentioned, without replies from the team, here: #123 (comment)
