NeMo is a toolkit for creating Conversational AI applications.
NeMo toolkit makes it possible for researchers to easily compose complex neural network architectures for conversational AI using reusable components - Neural Modules. Neural Modules are conceptual blocks of neural networks that take typed inputs and produce typed outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
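The typed-input/typed-output idea can be illustrated with a toy sketch in plain Python. This is *not* the NeMo API — the classes below are hypothetical stand-ins that only show how typed outputs of one module feed the typed inputs of the next:

```python
# Toy illustration of "neural modules" as typed building blocks.
# None of these classes exist in NeMo; they only mirror the concept.
from dataclasses import dataclass

@dataclass
class AudioSignal:
    samples: list

@dataclass
class Encoded:
    features: list

@dataclass
class LogProbs:
    values: list

class Encoder:
    """Consumes an AudioSignal, produces Encoded features."""
    def __call__(self, x: AudioSignal) -> Encoded:
        return Encoded(features=[s * 2.0 for s in x.samples])

class Decoder:
    """Consumes Encoded features, produces LogProbs."""
    def __call__(self, x: Encoded) -> LogProbs:
        return LogProbs(values=[f - 1.0 for f in x.features])

# Modules compose because the output type of one matches
# the input type of the next.
signal = AudioSignal(samples=[1.0, 2.0])
log_probs = Decoder()(Encoder()(signal))
```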
The toolkit comes with extendable collections of pre-built modules and ready-to-use models for automatic speech recognition (ASR), natural language processing (NLP) and text-to-speech synthesis (TTS). Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
NeMo works with:
- Python 3.6 or 3.7
- PyTorch 1.6 or above
We recommend using NVIDIA's PyTorch container version 20.06-py3 with NeMo's main branch:

```bash
docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
  -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
  stack=67108864 nvcr.io/nvidia/pytorch:20.06-py3
```
Once the requirements are satisfied (or you are inside the NVIDIA Docker container), simply install using pip:

```bash
pip install nemo_toolkit[all]==version  # a specific version
pip install nemo_toolkit[all]           # latest released version (currently 0.11.0)
```
Or if you want the latest (or a particular) version from GitHub:

```bash
python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[nlp]
```

where {BRANCH} should be replaced with the branch you want. This is the recommended route if you are testing out the latest WIP version of NeMo.

Alternatively, run:

```bash
./reinstall.sh
```

from NeMo's git root. This will install the version from the current branch.
The <nemo_github_folder>/examples/ folder contains various example scripts. Many of them look similar and take the same arguments because we use Facebook's Hydra for configuration.
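Hydra lets you override any field of a nested configuration from the command line with dotted keys, e.g. model.train_ds.batch_size=64. A minimal pure-Python sketch of that idea (this is not Hydra itself, and it only handles plain int/string values):

```python
# Sketch of Hydra-style dotted overrides applied to a nested config dict.
def apply_override(cfg: dict, override: str) -> None:
    key_path, _, raw_value = override.partition("=")
    keys = key_path.split(".")
    node = cfg
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    # Hydra performs full type resolution; here we only try int conversion.
    try:
        value = int(raw_value)
    except ValueError:
        value = raw_value
    node[keys[-1]] = value

config = {"model": {"train_ds": {"batch_size": 32}}, "trainer": {}}
for ov in ["model.train_ds.batch_size=64", "trainer.gpus=4"]:
    apply_override(config, ov)
```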
Here is an example command which trains an ASR model (QuartzNet15x5) on LibriSpeech, using 4 GPUs and mixed-precision training. (It assumes you are inside the container with NeMo installed.)

```bash
python examples/asr/speech_to_text.py --config-name=quartznet_15x5 \
  model.train_ds.manifest_filepath=<PATH_TO_DATA>/librispeech-train-all.json \
  model.validation_ds.manifest_filepath=<PATH_TO_DATA>/librispeech-dev-other.json \
  trainer.gpus=4 trainer.max_epochs=128 model.train_ds.batch_size=64 \
  +trainer.precision=16 +trainer.amp_level=O1 \
  +model.validation_ds.num_workers=16 +model.train_ds.num_workers=16
```
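Each manifest passed to the script is a JSON-lines file with one entry per utterance, giving the audio path, duration in seconds, and reference transcript. A typical entry looks like this (path and values are illustrative):

```json
{"audio_filepath": "/data/audio/sample_0001.wav", "duration": 1.2, "text": "hello world"}
```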
(Optional) Monitor training with TensorBoard:

```bash
tensorboard --bind_all --logdir nemo_experiments
```
The best way to get started with NeMo is to check out one of our tutorials. Most NeMo tutorials can be run on Google Colab.

To run a tutorial:

1. Click on its Colab link (see the table below).
2. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator).
Domain | Title | Notebook |
---|---|---|
NeMo | Exploring NeMo Fundamentals | 00_NeMo_Primer.ipynb |
ASR | ASR with NeMo | 01_ASR_with_NeMo.ipynb |
ASR | Speech Commands | 02_Speech_Commands.ipynb |
ASR | Online Noise Augmentation | 05_Online_Noise_Augmentation.ipynb |
NLP | Token Classification (Named Entity Recognition) | Token_Classification_Named_Entity_Recognition.ipynb |
NLP | GLUE Benchmark | GLUE_Benchmark.ipynb |
NLP | Punctuation and Capitalization | Punctuation_and_Capitalization.ipynb |
NLP | Question answering with SQuAD | Question_Answering_Squad.ipynb |
TTS | Speech Synthesis | TTS_inference.ipynb |
We welcome community contributions! Please refer to the CONTRIBUTING.md for the process.
NeMo is released under the Apache 2.0 license.