vzon

V-ZON is a technological framework that can autonomously navigate in 3D space, perform robust real-time object detection, and detect obstructions and obstacles along the way, all while supporting speech-powered I/O.

Motivation

It is quite surprising how often we humans take the most basic sensory experiences for granted. The fact that we can see each other right now, read each other's expressions, enjoy wonderful sunsets, and experience the beauty of this world through our eyes is easily overlooked. But what about the 248 million visually impaired individuals in the world, for whom performing the most basic daily activities is nothing less than a hassle? We at V-ZON believe that new frontiers in tech and innovation hold the answers to the ever-growing challenge of inclusion in society. Accessibility is often talked about but seldom worked on. We wanted to change that, and that's how V-ZON was born: a framework that seamlessly integrates future technologies with the social sustainability and accessibility goals of the 21st century.

Dependencies

  ```
  pip install <Name of module>
  ```

This should work for almost all dependencies in this project. If not, a quick search on PyPI will yield the right package name.

How it works

V-ZON is a tech framework that houses the technologies required for building solutions involving first-person autonomous localization and mapping.

- The main application of V-ZON is improving accessibility for the visually impaired.
- The framework can also be used to safely integrate real-world exploration with the metaverse, using edge and object detection.
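To make the detection-to-speech flow concrete, here is a minimal sketch of how the stages described above could fit together. All names here (`Detection`, `find_obstacles`, `describe`, the distance threshold) are illustrative assumptions, not the project's actual API; a real implementation would feed in live detector output and pass the resulting sentence to a text-to-speech engine.

```python
# Hypothetical sketch of the V-ZON pipeline: detect objects, flag the ones
# close enough to be obstacles, and build the message for speech output.
# All names and values here are illustrative, not the real V-ZON API.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str          # object class, e.g. "chair"
    distance_m: float   # estimated distance from the camera, in meters

def find_obstacles(detections: List[Detection],
                   threshold_m: float = 1.5) -> List[Detection]:
    """Keep only detections close enough to count as obstacles."""
    return [d for d in detections if d.distance_m <= threshold_m]

def describe(obstacles: List[Detection]) -> str:
    """Build the sentence a text-to-speech engine would speak."""
    if not obstacles:
        return "Path is clear."
    parts = [f"{d.label} at {d.distance_m:.1f} meters" for d in obstacles]
    return "Caution: " + ", ".join(parts)

# One frame's worth of (stubbed) detector output.
frame_detections = [Detection("chair", 0.9), Detection("door", 3.2)]
print(describe(find_obstacles(frame_detections)))
# -> Caution: chair at 0.9 meters
```

In a live system the stubbed list would be replaced by per-frame detector output, and `print` by a speech synthesis call, but the filter-then-describe structure stays the same.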