Writing a spell checker is a challenge because of the complicated nature of searching for and matching the closest correct candidates for user-written words, and then suggesting the correct spellings.
StackOverflow and Wikipedia offer in-depth knowledge on writing a spell checker, but the material is quite limited and outdated, and it is difficult to reverse engineer the spell checker that ships with Windows 11 by default. A spell checker has a few fundamental rules at its core, which others later tweak to make it faster and to add more features.
With no prior rules established for how it should work, and with neural spell checkers now prominent in Grammarly and Google Docs, I went on a quest to write a spell checker from scratch using my understanding of algorithms, specifically the LeetCode problems I solved over the past couple of weeks and the articles I read along the way.
While designing this algorithm, I derived a structure that is unique yet similar to past and present spell checkers:
- A Prefix Tree (Trie) that loads every word from the dictionary into its respective nodes
- Iterating over those words and using Longest Common Subsequence (LCS) to find the most likely matches for the input word
- Edit Distance (Levenshtein Distance) to determine the best match and return the result
The word with the highest LCS and the lowest edit distance is chosen as the best match.
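A minimal sketch of that pipeline is shown below, assuming a plain word-list dictionary. The class and method names here are illustrative, not the exact identifiers used in the repository.

```python
# Sketch of the classical pipeline: Prefix Trie + LCS + Edit Distance.
# Names are illustrative; see the repository for the real implementation.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False


class ClassicalSpellChecker:
    def __init__(self, dictionary):
        self.root = TrieNode()
        for word in dictionary:
            self._insert(word.lower())

    def _insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def _collect(self, node=None, prefix=""):
        # Walk the trie and yield every stored word.
        node = node or self.root
        if node.is_word:
            yield prefix
        for ch, child in node.children.items():
            yield from self._collect(child, prefix + ch)

    @staticmethod
    def _lcs_length(a, b):
        # Longest Common Subsequence via dynamic programming.
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[-1][-1]

    @staticmethod
    def _edit_distance(a, b):
        # Levenshtein distance via dynamic programming.
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            dp[i][0] = i
        for j in range(len(b) + 1):
            dp[0][j] = j
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution
        return dp[-1][-1]

    def correct(self, word):
        word = word.lower()
        # Highest LCS wins; ties broken by lowest edit distance.
        return max(self._collect(),
                   key=lambda w: (self._lcs_length(word, w),
                                  -self._edit_distance(word, w)))


if __name__ == "__main__":
    checker = ClassicalSpellChecker(["hello", "help", "world", "word"])
    print(checker.correct("wrold"))  # -> "world"
```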
Practising LeetCode concepts in a real-world project is valuable: it builds a better understanding of how different algorithms come into play and work together to solve a critical problem, and it showcases problem-solving and creativity.
- Edit Distance
- Longest Common Subsequence
- Implement a Trie
- Longest Common Prefix
These problems are part of the LeetCode 75 study plan: https://leetcode.com/studyplan/leetcode-75/
- Classical SpellChecker
- Improved SpellChecker
Classical SpellChecker: (Edit Distance + Longest Common Subsequence + Prefix Trie)
Improved SpellChecker: (Hamming Distance + Longest Common Prefix + Longest Common Subsequence + Prefix Trie)
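As a rough illustration of the extra signals in the improved variant, here is a sketch of how Longest Common Prefix and a length-tolerant Hamming distance could be combined into a ranking key alongside the LCS score from the classical sketch. The weighting and the length penalty are my own assumptions, not the repository's exact scoring.

```python
# Extra signals used by the improved variant; weighting here is illustrative.

def longest_common_prefix(a: str, b: str) -> int:
    """Length of the shared prefix of a and b."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n


def hamming_distance(a: str, b: str) -> int:
    """Positions that differ, with the length difference added as a penalty
    (strict Hamming distance is only defined for equal-length strings)."""
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))


def rank_key(misspelled: str, candidate: str, lcs_len: int) -> tuple:
    """Sort key for candidates: prefer a longer LCS, then a longer common
    prefix, then a smaller Hamming distance."""
    return (lcs_len,
            longest_common_prefix(misspelled, candidate),
            -hamming_distance(misspelled, candidate))
```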
- Added multiple types of spell checkers:
- BERT next word predictor
- LSTM next word predictor (Tensorflow trained model)
- Embedding Spell Checker (experimental)
- Gemma Spell Checker
The Embedding SpellChecker is still in its infancy and does not provide a concrete alternative to the purely algorithmic spell checkers; some work on its logic is probably still needed.
I’ve added the model files and tokenizers on Hugging Face, and the code for running inference with the model can be found in the repository. Raise an issue if you run into errors.
Model Card: https://huggingface.co/sleeping4cat/Tensorflow-next-word-predictor
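If you just want to try the hosted next-word predictor, something along these lines should get you started. The file names (`model.h5`, `tokenizer.json`) and the Keras tokenizer format are assumptions on my part; check the model card and the repository's inference code for the actual ones.

```python
# Hedged sketch: download the next-word-predictor files from Hugging Face and
# run them with TensorFlow. File names and tokenizer format are assumptions;
# refer to the model card / repository inference code for the real ones.
from huggingface_hub import snapshot_download
import tensorflow as tf

local_dir = snapshot_download(repo_id="sleeping4cat/Tensorflow-next-word-predictor")

# Assumed file names - replace with the ones actually listed on the model card.
model = tf.keras.models.load_model(f"{local_dir}/model.h5")
with open(f"{local_dir}/tokenizer.json") as f:
    tokenizer = tf.keras.preprocessing.text.tokenizer_from_json(f.read())

text = "the quick brown"
seq = tokenizer.texts_to_sequences([text])[0]
seq = tf.keras.preprocessing.sequence.pad_sequences([seq], maxlen=model.input_shape[1])
probs = model.predict(seq, verbose=0)[0]
print(tokenizer.index_word[int(probs.argmax())])  # predicted next word
```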
- I don’t provide a UI for interacting with the algorithm; a terminal interface needs to be used.
- A public dictionary dataset was used.
- Aims to provide the clearest explanation of the spell-checker algorithm and code on the Internet.
- Time complexity: O(n * m * (m + m)) ≈ O(n * m^2), since for each of the n dictionary words both the LCS and edit-distance tables cost O(m * m), where m is the word length.
Consider starring the repository if it helps you!