
decoding without the language model #167

Open

HariKrishna-Vydana opened this issue Jan 1, 2018 · 2 comments

Comments

@HariKrishna-Vydana

Is there a way to decode without considering the influence of the language model?

@fmetze (Contributor) commented Jan 1, 2018

Greedy decoding? You can simply search for the sequence of peaks in the NN output. Or you could create a fake LM in the ARPA file format that has unity transition probabilities for all words, like a grammar.
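For the first option, a minimal sketch of greedy (best-path) CTC decoding: take the argmax label per frame, collapse repeats, and drop blanks. This assumes `log_probs` is a `(T, V)` array of per-frame network outputs and that index 0 is the CTC blank; both are assumptions for illustration, not Eesen specifics.

```python
import numpy as np

def greedy_ctc_decode(log_probs, blank=0):
    """Best-path CTC decode: argmax per frame, collapse repeats, drop blanks."""
    best_path = np.argmax(log_probs, axis=1)   # peak label at each frame
    tokens = []
    prev = blank
    for label in best_path:
        if label != blank and label != prev:   # skip blanks and repeated labels
            tokens.append(int(label))
        prev = label
    return tokens
```

For the second option, ARPA files store log10 probabilities, so a "unity transition probability" LM is one where every entry is 0.0 (log10 of 1). A hypothetical unigram file might look like this; the word list here is a placeholder for your own vocabulary:

```
\data\
ngram 1=4

\1-grams:
0.0 <s>
0.0 </s>
0.0 hello
0.0 world

\end\
```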

@ramonsanabria

In:

https://github.com/srvk/eesen/blob/tf_clean/tf/ctc-am/test.py

you have --compute_ter, which gives you the token error rate (without a language model).
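For reference, the token error rate is just the edit (Levenshtein) distance between the hypothesis and reference token sequences, normalized by the reference length. A minimal sketch, independent of the Eesen implementation:

```python
def token_error_rate(ref, hyp):
    """Edit distance between token sequences, normalized by reference length."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```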
