
# djangoSpeech

## Abstract

Speech has become an essential mode of human-computer interaction. As modern interfaces rely increasingly on voice commands, a robust speech framework is paramount, and the demand for intuitive interfaces that understand and respond to natural-language commands continues to grow. Across application domains, a lack of synchronization between different user inputs causes confusion and dissatisfaction, leading to a disjointed user experience and lost productivity. This project presents a "Robust Speech-GUI Integration Framework for Frontend Audio Detection and Tracking". The framework converts speech commands to text and precisely aligns them with the corresponding Graphical User Interface (GUI) events. Its plug-and-play architecture integrates multiple methodologies for each submodule. The initial module, which detects the onset of a speech command, offers three variations: push-to-talk, a predefined wake word, and a customized wake word. The off-the-shelf Silero VAD model then determines the endpoint of speech. Finally, the WhisperX module produces word-level transcription with precise timestamps, which are aligned with concurrently captured GUI events. The system runs either fully on-device or partially in the cloud via socket communication.
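
As a rough illustration of the pipeline described above, the sketch below chains the off-the-shelf Silero VAD model (endpoint detection) with WhisperX (word-level, timestamped transcription) and then pairs each word with the nearest concurrently captured GUI event. It is not taken from this project's codebase: the file name `command.wav`, the `gui_events` list, and the nearest-event pairing rule are illustrative assumptions, and the framework's actual submodules may be organized differently.

```python
# Minimal pipeline sketch (hypothetical glue code, not the project's implementation).
# Assumes the silero-vad hub model and the whisperx package are available.
import torch
import whisperx

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
SAMPLE_RATE = 16000

# 1) Endpoint detection with the off-the-shelf Silero VAD model.
vad_model, vad_utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, _ = vad_utils

wav = read_audio("command.wav", sampling_rate=SAMPLE_RATE)  # assumed input file
speech_segments = get_speech_timestamps(wav, vad_model, sampling_rate=SAMPLE_RATE)
endpoint_sample = speech_segments[-1]["end"] if speech_segments else len(wav)
print(f"Speech endpoint at {endpoint_sample / SAMPLE_RATE:.2f} s")

# 2) Word-level transcription and timestamping with WhisperX.
asr_model = whisperx.load_model("base", DEVICE, compute_type="float32")
audio = whisperx.load_audio("command.wav")
result = asr_model.transcribe(audio, batch_size=8)

align_model, metadata = whisperx.load_align_model(
    language_code=result["language"], device=DEVICE
)
aligned = whisperx.align(result["segments"], align_model, metadata, audio, DEVICE)

# 3) Pair each timestamped word with the nearest GUI event.
#    gui_events is a hypothetical list of (timestamp_seconds, event_name) tuples
#    recorded by the frontend while the audio was being captured.
gui_events = [(0.4, "button_hover"), (1.1, "button_click")]

for segment in aligned["segments"]:
    for word in segment.get("words", []):
        if "start" not in word:
            continue
        nearest = min(gui_events, key=lambda ev: abs(ev[0] - word["start"]))
        print(f"{word['word']!r} @ {word['start']:.2f}s -> GUI event {nearest[1]!r}")
```

In the cloud-assisted mode mentioned above, the same steps would presumably run server-side, with the captured audio buffer streamed over a socket instead of being processed locally.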

Copyright © 2024 Beulah Karrolla