Language barriers are very much still a real thing.
We can take baby steps to help close that gap. Speech-to-text and translation tools have already made things a heap easier. But what about people who don't speak or can't hear?
What about them?
Well...you can use TensorFlow Object Detection and Python to start closing that gap. I built an end-to-end custom object detection model that lets you translate sign language in real time.
In this project I went through the following steps (rough code sketches for the main ones follow below):
- Collect images for deep learning using your webcam and OpenCV
- Label images for sign language detection using LabelImg
- Set up the TensorFlow Object Detection pipeline configuration
- Use transfer learning to train a deep learning model
- Detect sign language in real time using OpenCV
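
Here's a minimal sketch of the image-collection step, assuming a default webcam and a hypothetical set of sign labels; swap in whatever signs, paths, and image counts you actually want to work with.

```python
# Capture a handful of webcam frames per sign with OpenCV.
# Labels, paths, and counts below are assumptions for illustration.
import os
import time
import uuid

import cv2

IMAGES_PATH = os.path.join("Tensorflow", "workspace", "images", "collectedimages")
labels = ["hello", "thanks", "yes", "no"]   # example signs (assumption)
number_of_imgs = 15                          # images per sign

for label in labels:
    os.makedirs(os.path.join(IMAGES_PATH, label), exist_ok=True)
    cap = cv2.VideoCapture(0)                # open the default webcam
    print(f"Collecting images for {label}")
    time.sleep(5)                            # time to get the sign ready
    for img_num in range(number_of_imgs):
        ret, frame = cap.read()
        if not ret:
            break
        img_name = os.path.join(IMAGES_PATH, label, f"{label}.{uuid.uuid1()}.jpg")
        cv2.imwrite(img_name, frame)         # save the frame to disk
        cv2.imshow("frame", frame)
        time.sleep(2)                        # pause between captures
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
cv2.destroyAllWindows()
```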
LabelImg: https://github.com/tzutalin/labelImg
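
LabelImg saves a Pascal VOC-style XML annotation per image by default. The Object Detection API also needs a label map that ties each sign name to an integer id; here's a minimal sketch of writing one (the sign names and output path are assumptions):

```python
# Write a label_map.pbtxt for the example signs used above.
labels = [
    {"name": "hello", "id": 1},
    {"name": "thanks", "id": 2},
    {"name": "yes", "id": 3},
    {"name": "no", "id": 4},
]

with open("Tensorflow/workspace/annotations/label_map.pbtxt", "w") as f:
    for label in labels:
        f.write("item {\n")
        f.write(f"    name: '{label['name']}'\n")
        f.write(f"    id: {label['id']}\n")
        f.write("}\n")
```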
Installing the TensorFlow Object Detection API: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html
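
For the transfer learning step, the usual approach is to take a pre-trained pipeline config from the TF2 Model Zoo and point it at your own label map and TFRecords. The sketch below assumes an SSD-style pre-trained model and hypothetical paths; your config, checkpoint, and record locations will differ.

```python
# Update a pre-trained pipeline.config for fine-tuning on the sign data.
# All paths below are assumptions for illustration.
import tensorflow as tf
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

CONFIG_PATH = "Tensorflow/workspace/models/my_ssd/pipeline.config"
CHECKPOINT_PATH = "Tensorflow/workspace/pre-trained-models/ssd_mobilenet/checkpoint/ckpt-0"
LABEL_MAP_PATH = "Tensorflow/workspace/annotations/label_map.pbtxt"
TRAIN_RECORD = "Tensorflow/workspace/annotations/train.record"
TEST_RECORD = "Tensorflow/workspace/annotations/test.record"

pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(CONFIG_PATH, "r") as f:
    text_format.Merge(f.read(), pipeline_config)

pipeline_config.model.ssd.num_classes = 4                      # one class per sign
pipeline_config.train_config.batch_size = 4
pipeline_config.train_config.fine_tune_checkpoint = CHECKPOINT_PATH
pipeline_config.train_config.fine_tune_checkpoint_type = "detection"
pipeline_config.train_input_reader.label_map_path = LABEL_MAP_PATH
pipeline_config.train_input_reader.tf_record_input_reader.input_path[:] = [TRAIN_RECORD]
pipeline_config.eval_input_reader[0].label_map_path = LABEL_MAP_PATH
pipeline_config.eval_input_reader[0].tf_record_input_reader.input_path[:] = [TEST_RECORD]

with tf.io.gfile.GFile(CONFIG_PATH, "w") as f:
    f.write(text_format.MessageToString(pipeline_config))
```

From there, training is kicked off with the API's `model_main_tf2.py` script, passing the config path via `--pipeline_config_path` and an output directory via `--model_dir`, which fine-tunes the pre-trained checkpoint on the labelled sign images.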
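
Finally, the real-time detection loop. This sketch assumes the trained checkpoint has been exported to a SavedModel (the API ships an `exporter_main_v2.py` script for that) and that the paths below exist:

```python
# Run the exported detector on live webcam frames and draw the results.
import cv2
import numpy as np
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils

SAVED_MODEL_PATH = "Tensorflow/workspace/exported-model/saved_model"   # assumption
LABEL_MAP_PATH = "Tensorflow/workspace/annotations/label_map.pbtxt"    # assumption

detect_fn = tf.saved_model.load(SAVED_MODEL_PATH)
category_index = label_map_util.create_category_index_from_labelmap(LABEL_MAP_PATH)

cap = cv2.VideoCapture(0)                     # default webcam
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # The exported detection model expects a batched uint8 tensor.
    input_tensor = tf.convert_to_tensor(np.expand_dims(frame, 0), dtype=tf.uint8)
    detections = detect_fn(input_tensor)

    num = int(detections.pop("num_detections"))
    detections = {k: v[0, :num].numpy() for k, v in detections.items()}
    detections["detection_classes"] = detections["detection_classes"].astype(np.int64)

    image_with_boxes = frame.copy()
    viz_utils.visualize_boxes_and_labels_on_image_array(
        image_with_boxes,
        detections["detection_boxes"],
        detections["detection_classes"],
        detections["detection_scores"],
        category_index,
        use_normalized_coordinates=True,
        max_boxes_to_draw=5,
        min_score_thresh=0.5,
    )

    cv2.imshow("Sign language detection", image_with_boxes)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```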