Hi, everyone.

I am practicing implementing a Transformer model that machine-translates English into Korean by following TensorFlow guides and books. Currently, I am running into an issue where a custom model containing text.SentencepieceTokenizer does not detokenize properly after being saved and reloaded. The English SentencepieceTokenizer works without any problem, but the Korean SentencepieceTokenizer produces garbled output once the model is saved and reloaded. An example is below:
In: export_translator("tom should have stayed in boston.").numpy().decode('utf-8')
Out: '톰은 보스턴에 있어야 했는데. '
In: tf.saved_model.save(export_translator, export_dir='./translator')
In: reloaded = tf.saved_model.load("./translator")
In: reloaded("tom should have stayed in boston.").numpy().decode('utf-8')
Out: 'in dogha boring week proof peace'
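For context, the export model bundles the SentencepieceTokenizers inside a tf.Module that is saved with tf.saved_model. Below is a minimal sketch of that save/reload round trip; it is an assumed reconstruction, not the exact notebook code, and the file names and the detokenize_ko wrapper are placeholders:

```python
import tensorflow as tf
import tensorflow_text as text

# Assumed structure (minimal sketch): both SentencepieceTokenizers live
# inside a tf.Module that is saved with tf.saved_model, which is where the
# Korean detokenization goes wrong after reloading.
class TokenizerBundle(tf.Module):
    def __init__(self, en_model_proto, ko_model_proto):
        super().__init__()
        # The protos are the serialized .model files from SentencePiece training.
        self.en = text.SentencepieceTokenizer(model=en_model_proto)
        self.ko = text.SentencepieceTokenizer(model=ko_model_proto)

    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.int32)])
    def detokenize_ko(self, ids):
        return self.ko.detokenize(ids)

# 'en.model' / 'ko.model' are placeholder paths for the trained vocabularies.
bundle = TokenizerBundle(open('en.model', 'rb').read(),
                         open('ko.model', 'rb').read())

ids = bundle.ko.tokenize('톰은 보스턴에 있어야 했는데.')
print(bundle.ko.detokenize(ids).numpy().decode('utf-8'))    # correct before saving

tf.saved_model.save(bundle, './tokenizer_bundle')
reloaded = tf.saved_model.load('./tokenizer_bundle')
print(reloaded.detokenize_ko(ids).numpy().decode('utf-8'))  # compare after reload
```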
I am using TensorFlow 2.17.0 on Google Colab. The full code and output can be found in the Google Colab link below; you can reproduce the issue by running the whole notebook with Ctrl + F9. The entire run takes approximately 5 to 5.5 minutes on a T4 GPU, and the issue appears at the bottom of the notebook.
Colab Link: https://colab.research.google.com/drive/1IMFWoJ1s5ReKU9LYENROpAsZ47D6cG8T?usp=sharing
The data I used is 'kor-eng.zip' located at "https://www.manythings.org/anki/".
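For reference, the two SentencePiece vocabularies wrapped in the sketch above would be trained from that data roughly as follows. This is a hedged sketch only: kor.txt in kor-eng.zip contains tab-separated English/Korean/attribution lines, and the file names and vocab sizes here are placeholders, not the notebook's actual values.

```python
import sentencepiece as spm

# Split the tab-separated "English<TAB>Korean<TAB>attribution" lines from
# kor-eng.zip into two monolingual training files (names are placeholders).
with open('kor.txt', encoding='utf-8') as f:
    pairs = [line.rstrip('\n').split('\t') for line in f if line.strip()]

with open('train.en', 'w', encoding='utf-8') as f:
    f.write('\n'.join(p[0] for p in pairs))
with open('train.ko', 'w', encoding='utf-8') as f:
    f.write('\n'.join(p[1] for p in pairs))

# Train one SentencePiece model per language; vocab_size is a placeholder.
spm.SentencePieceTrainer.train(input='train.en', model_prefix='en', vocab_size=8000)
spm.SentencePieceTrainer.train(input='train.ko', model_prefix='ko', vocab_size=8000)
# This produces en.model / ko.model, the serialized protos loaded in the sketch above.
```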
I'm sorry that not all of the code comments are in English. Some have already been translated from Korean into English, and I will translate the remaining ones so that they are not an inconvenience to read.