I understand that the WFST-based decoding requires both a lexicon and a language model (in the form of n-gram frequencies).
But the README mentions that a character-level RNN-LM can be used instead, without the need for a lexicon.
I see the code to train a character-level RNN-LM, but how is it used during decoding?
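My current understanding is that, without a lexicon, the char RNN-LM would be applied directly inside a lexicon-free CTC prefix beam search: the LM scores every candidate character extension of each prefix, so no lexicon or n-gram FST is needed. Here is a minimal sketch of what I imagine that looks like; the function names, the `lm_score` callable, and the `alpha`/`beta` weights are my own illustration, not Eesen's actual API:

```python
import math
from collections import defaultdict
import numpy as np

NEG_INF = -float("inf")

def logsumexp(*args):
    m = max(args)
    if m == NEG_INF:
        return NEG_INF
    return m + math.log(sum(math.exp(a - m) for a in args))

def ctc_beam_search(log_probs, alphabet, lm_score, beam_size=8, alpha=0.8, beta=1.0):
    """Lexicon-free CTC prefix beam search (Hannun et al., 2014).

    log_probs: (T, V) per-frame log posteriors; column 0 is the CTC blank.
    alphabet:  characters for columns 1..V-1.
    lm_score:  callable(prefix, char) -> log P(char | prefix), e.g. a char RNN-LM.
    alpha, beta: LM weight and per-character insertion bonus (hypothetical knobs).
    """
    # Each beam entry maps a prefix (tuple of chars) to (log_p_blank, log_p_nonblank).
    beams = {(): (0.0, NEG_INF)}
    for t in range(log_probs.shape[0]):
        next_beams = defaultdict(lambda: (NEG_INF, NEG_INF))
        for prefix, (p_b, p_nb) in beams.items():
            for v in range(log_probs.shape[1]):
                p = log_probs[t, v]
                if v == 0:  # blank: prefix unchanged
                    nb_b, nb_nb = next_beams[prefix]
                    next_beams[prefix] = (logsumexp(nb_b, p_b + p, p_nb + p), nb_nb)
                    continue
                c = alphabet[v - 1]
                new_prefix = prefix + (c,)
                nb_b, nb_nb = next_beams[new_prefix]
                # The char LM replaces the lexicon: score the extension directly.
                lm = alpha * lm_score(prefix, c) + beta
                if prefix and c == prefix[-1]:
                    # Repeated char: extension is only valid after a blank ...
                    next_beams[new_prefix] = (nb_b, logsumexp(nb_nb, p_b + p + lm))
                    # ... otherwise the frames collapse and the prefix stays the same.
                    ob_b, ob_nb = next_beams[prefix]
                    next_beams[prefix] = (ob_b, logsumexp(ob_nb, p_nb + p))
                else:
                    next_beams[new_prefix] = (
                        nb_b, logsumexp(nb_nb, logsumexp(p_b, p_nb) + p + lm))
        beams = dict(sorted(next_beams.items(),
                            key=lambda kv: -logsumexp(*kv[1]))[:beam_size])
    best = max(beams.items(), key=lambda kv: logsumexp(*kv[1]))
    return "".join(best[0])

# Toy usage with a uniform char LM standing in for the trained RNN-LM:
alphabet = list("ab")
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, len(alphabet) + 1))
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
uniform_lm = lambda prefix, c: -math.log(len(alphabet))
print(ctc_beam_search(log_probs, alphabet, uniform_lm))
```

In a real decoder the RNN-LM's hidden state would presumably be cached per prefix rather than recomputed from scratch at each step, but is this roughly what the Eesen code does?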
Siddharth, did you add documentation for this to the Eesen repository yet? Thanks, Florian