
Commit

Format the code
abuelnasr0 committed Feb 20, 2024
1 parent 9b88773 commit 0fe9419
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions keras_nlp/tokenizers/word_piece_tokenizer.py
@@ -262,11 +262,11 @@ class WordPieceTokenizer(tokenizer.Tokenizer):
     oov_token: str. The string value to substitute for
         an unknown token. It must be included in the vocab.
         Defaults to `"[UNK]"`.
-    special_tokens: list. A list of strings that will never be split during
-        the word-level splitting applied before the word-piece encoding.
-        This can be used to ensure special tokens map to unique indices in
-        the vocabulary, even if these special tokens contain splittable
-        characters such as punctuation. Special tokens must still be
+    special_tokens: list. A list of strings that will never be split during
+        the word-level splitting applied before the word-piece encoding.
+        This can be used to ensure special tokens map to unique indices in
+        the vocabulary, even if these special tokens contain splittable
+        characters such as punctuation. Special tokens must still be
         included in `vocabulary`. Defaults to `None`.

     References:
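For context, the `special_tokens` argument documented above can be exercised directly. The following is a minimal sketch, assuming the `keras_nlp.tokenizers.WordPieceTokenizer` constructor accepts `vocabulary`, `oov_token`, and `special_tokens` as described in this diff; the vocabulary entries and token strings are illustrative only, not taken from the commit.

import keras_nlp

# Illustrative vocabulary; special tokens must also appear in it.
vocab = ["[UNK]", "[CLS]", "[SEP]", "the", "qu", "##ick", "brown", "fox", "."]

tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
    vocabulary=vocab,
    oov_token="[UNK]",
    # Protected from word-level splitting even though "[" and "]" are
    # splittable punctuation characters.
    special_tokens=["[CLS]", "[SEP]"],
)

# "[CLS]" and "[SEP]" map to their own vocabulary indices instead of being
# broken into "[", "CLS", "]" and falling back to the OOV token.
print(tokenizer("[CLS] the quick brown fox . [SEP]"))

Without `special_tokens`, the bracketed markers would be split on punctuation during the word-level pre-tokenization pass and could not map to single vocabulary indices.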
