We could try to show the number of tokens in the assistants and the chat. However, there are several challenges:
- Different models use different tokenizers. As I write this, OpenAI has documented three different tokenizers on its website. LLaMA uses yet another one, etc.
- The number of tokens depends on the language. For English, there are good estimates that could be used instead of a tokenizer. Quoting OpenAI: "A helpful rule of thumb is that one token generally corresponds to ~4 characters of text for common English text. This translates to roughly ¾ of a word (so 100 tokens ~= 75 words)." Not every text is written in English, though.
- From a performance perspective, it is desirable not to run the actual tokenizer but to use a good approximation (see the sketch after this list).
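Such an approximation could be as simple as applying OpenAI's rule of thumb. A minimal sketch, assuming a fixed characters-per-token ratio (the class name and the divisor are illustrative, not part of AI Studio, and the ratio would need tuning for non-English text):

```csharp
using System;

// Tokenizer-free estimate based on OpenAI's rule of thumb
// (~4 characters per token for common English text).
public static class TokenEstimator
{
    private const double CharsPerToken = 4.0;

    public static int Estimate(string text) =>
        string.IsNullOrEmpty(text)
            ? 0
            // Round up so the estimate errs on the high side.
            : (int)Math.Ceiling(text.Length / CharsPerToken);
}
```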
For C#, there are the libraries SharpToken and, alternatively, TikTokenSharp. However, TikTokenSharp loads its data from the internet, which I don't want for AI Studio. The developers of SharpToken point out that Microsoft has since developed its own library, Microsoft.ML.Tokenizers, which is supposed to be very performant.
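For comparison, an exact count via SharpToken would look roughly like this. This is a sketch based on the library's public API; the encoding name `cl100k_base` corresponds to OpenAI's GPT-3.5/GPT-4 models, and other providers (e.g. LLaMA) would need a different tokenizer entirely:

```csharp
using System;
using SharpToken;

// Exact token counting for OpenAI-style models via SharpToken.
var encoding = GptEncoding.GetEncoding("cl100k_base");
var tokenCount = encoding.Encode("How many tokens does this sentence use?").Count;
Console.WriteLine($"Token count: {tokenCount}");
```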
We will continue to monitor the situation. We may need to implement a token estimator and show the result as an estimate in the UI.