Deployment And Layout Of Deep Learning-Based Smart Eyewear Applications Platform For Vision Disabled Individuals


Himanshu Kaushik, Manpreet Bajwa, Anupam Kumar Sharma, Prashant Vats, Shipra Varshney, Sandeep Singh, Siddhartha Sankar Biswas

Abstract

This study proposes a system to help people with visual impairments navigate and move about independently. Several assistive services are available today, some of which are discussed in this paper; nevertheless, no reliable and cost-effective alternative has yet been offered to replace the technologies that visually impaired people currently rely on in their everyday tasks. The paper begins by examining the problem and the original attempts at resolving it, then reviews recent advances and research in the field of assistive technologies, and finally proposes the design and implementation of a system to assist visually impaired users. The proposed device comprises a Raspberry Pi board, a camera, a battery, eyeglasses, headphones, a backup charger, and the necessary connections. The camera captures the scene in front of the user; an R-CNN model together with deep learning modules performs the computer vision analysis, and the final output is delivered through the headphones to the user's ears. For image data processing, the device employs a region-based convolutional neural network (R-CNN) running on the microcontroller, and it uses the Tesseract module for Python to perform optical character recognition (OCR) and deliver the recognized text to the user. The paper presents both the methods and the results for these tasks; the proposed system can be applied in real-world scenarios and is suitable for people with visual impairments.
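The capture, detect, OCR, announce pipeline described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the R-CNN inference and Tesseract call are stubbed out, and all helper names, the `Detection` structure, and the 0.5 confidence threshold are assumptions introduced here.

```python
# Illustrative sketch of the pipeline: the camera frame is assumed to have
# already been run through an R-CNN detector and Tesseract OCR; this code
# only composes the sentence that would be spoken through the headphones.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str         # object class predicted by the detector (assumed format)
    confidence: float  # detection score in [0, 1]


def compose_announcement(detections: List[Detection],
                         ocr_text: str,
                         threshold: float = 0.5) -> str:
    """Build the message sent to the user's headphones.

    Detections below `threshold` are dropped; any OCR text is appended.
    """
    kept = [d.label for d in detections if d.confidence >= threshold]
    parts = []
    if kept:
        parts.append("I can see: " + ", ".join(kept) + ".")
    text = ocr_text.strip()
    if text:
        parts.append("The text reads: " + text + ".")
    if not parts:
        parts.append("Nothing recognized.")
    return " ".join(parts)
```

For example, `compose_announcement([Detection("door", 0.9), Detection("chair", 0.3)], "EXIT")` keeps only the high-confidence detection and returns `"I can see: door. The text reads: EXIT."`. In a real deployment the resulting string would be passed to a text-to-speech engine and played through the headphones.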

Article Details

Section
Articles