A low-cost prototype of an assistive device that uses computer vision models to recognize text in images of its user's surroundings and reads the extracted text aloud in real time.

- The device was built entirely from open-source hardware and pre-trained models.
- This repository contains all of the code that runs on the device's hardware.
- The device works both online and offline, but produces more accurate text recognition and more natural-sounding speech when connected to the internet (see the pipeline sketch below).
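To illustrate the capture, OCR, and speech pipeline the device implements, here is a minimal sketch of the offline path. It assumes a camera at index 0, OpenCV for frame capture, Tesseract OCR via `pytesseract`, and `pyttsx3` for offline text-to-speech; these libraries and function names are assumptions for the example and may differ from the code actually running on the device.

```python
# Minimal offline sketch: capture a frame, run OCR, and read the result aloud.
# Assumes a camera at index 0, a system Tesseract install, and the
# opencv-python, pytesseract, and pyttsx3 packages.
import cv2
import pytesseract
import pyttsx3


def capture_frame(camera_index: int = 0):
    """Grab a single frame from the camera."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the camera")
    return frame


def extract_text(frame) -> str:
    """Run OCR on the frame and return the recognized text."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale tends to OCR better
    return pytesseract.image_to_string(gray).strip()


def speak(text: str) -> None:
    """Read the text aloud with the offline TTS engine."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()


if __name__ == "__main__":
    text = extract_text(capture_frame())
    if text:
        speak(text)
```

In the online mode described above, the OCR and speech steps would be swapped for cloud-backed services to gain accuracy and more natural voices, while this offline path serves as the fallback when no connection is available.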