An efficient way for people who rely on sign language to communicate, powered by machine intelligence.
Our approach leverages n-gram modeling and Knowledge Distillation to improve both speed and accuracy. The system combines these techniques to capture the temporal dependencies between gestures while reducing the computational burden typically associated with deep learning models. We use variants of the EfficientNetV2 and MobileNetV3 architectures, pre-trained on the ImageNet dataset, to strike a balance between efficiency and accuracy.
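As a rough illustration of the distillation idea, the sketch below trains a lightweight MobileNetV3 student against an EfficientNetV2 teacher using PyTorch and torchvision. The 26-class ASL-alphabet head, loss weights, temperature, and optimizer settings are illustrative assumptions, not the repository's actual training code.

```python
# Hypothetical distillation setup: the 26-class head, hyperparameters, and
# training step are illustrative assumptions, not the repository's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 26  # assumed: one class per ASL alphabet letter

# Teacher: larger EfficientNetV2 variant pre-trained on ImageNet.
teacher = models.efficientnet_v2_s(weights="IMAGENET1K_V1")
teacher.classifier[1] = nn.Linear(teacher.classifier[1].in_features, NUM_CLASSES)
teacher.eval()  # assumed already fine-tuned on the gesture dataset

# Student: lightweight MobileNetV3 intended for real-time inference.
student = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
student.classifier[3] = nn.Linear(student.classifier[3].in_features, NUM_CLASSES)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL loss (teacher guidance) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# One illustrative training step on a batch of gesture images.
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
images = torch.randn(8, 3, 224, 224)           # stand-in for a real batch
labels = torch.randint(0, NUM_CLASSES, (8,))   # stand-in labels

with torch.no_grad():
    teacher_logits = teacher(images)
loss = distillation_loss(student(images), teacher_logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```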
- Real-time Translation: Converts sign language gestures into text or spoken words in real time (see the n-gram rescoring sketch after this list).
- Fast Speed: Thanks to EfficientNet and Knowledge Distillation, our system runs faster than traditional CNN-based recognition systems.
- High Accuracy: Advanced machine learning algorithms ensure high accuracy in gesture recognition.
- American Sign Language Support: Focuses on the ASL alphabet (A, B, C, and so on).
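One way the n-gram modeling mentioned above could smooth the letter stream is by rescoring per-frame classifier probabilities with bigram letter-transition statistics. The sketch below is a minimal, self-contained example; the corpus, smoothing, and weighting factor are assumptions for illustration only.

```python
# Hypothetical bigram rescoring of per-frame letter probabilities; the
# transition counts and weighting factor are illustrative assumptions.
import math
from collections import Counter, defaultdict

LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def build_bigram(corpus, smoothing=1.0):
    """Estimate P(next letter | previous letter) with add-one smoothing."""
    counts = defaultdict(Counter)
    for word in corpus:
        for prev, nxt in zip(word, word[1:]):
            counts[prev][nxt] += 1
    probs = {}
    for prev in LETTERS:
        total = sum(counts[prev].values()) + smoothing * len(LETTERS)
        probs[prev] = {
            nxt: (counts[prev][nxt] + smoothing) / total for nxt in LETTERS
        }
    return probs

def rescore(frame_probs, bigram, lam=0.3):
    """Greedy decode: combine classifier confidence with bigram transition score."""
    decoded, prev = [], None
    for dist in frame_probs:  # dist maps letter -> classifier probability
        def score(letter):
            lm = math.log(bigram[prev][letter]) if prev else 0.0
            return (1 - lam) * math.log(dist[letter] + 1e-9) + lam * lm
        best = max(LETTERS, key=score)
        decoded.append(best)
        prev = best
    return "".join(decoded)

# Tiny usage example with made-up data.
bigram = build_bigram(["HELLO", "HELP", "CALL"])
frames = [
    {l: (0.9 if l == "H" else 0.1 / 25) for l in LETTERS},
    {l: (0.5 if l == "E" else 0.5 / 25) for l in LETTERS},
]
print(rescore(frames, bigram))  # prints "HE" for this toy input
```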
To set up GestureSpeak locally, follow these steps:
- Clone the repository:

  ```bash
  git clone https://github.com/YapWH/Understand-What-You-See.git
  cd GestureSpeak
  ```
- Validate the setup by running real-time recognition:

  ```bash
  python real-time.py
  ```
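For reference, the sketch below shows what a real-time recognition loop could look like, assuming OpenCV webcam capture, a trained MobileNetV3 student checkpoint saved as `student.pt` (a hypothetical path), and a 26-letter output head; the actual contents of `real-time.py` may differ.

```python
# Hypothetical real-time loop; the checkpoint path "student.pt" and the
# preprocessing choices are assumptions, not necessarily what real-time.py does.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

# Load the distilled student network (assumed saved as a state_dict).
model = models.mobilenet_v3_small(weights=None)
model.classifier[3] = nn.Linear(model.classifier[3].in_features, len(LETTERS))
model.load_state_dict(torch.load("student.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(rgb).unsqueeze(0))
    letter = LETTERS[logits.argmax(dim=1).item()]
    # Overlay the predicted letter on the camera feed.
    cv2.putText(frame, letter, (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
    cv2.imshow("GestureSpeak", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```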
This project is licensed under the MIT License. See the LICENSE file for more details.