Team Members

  • Kareem Emad El-Din [kareem1608590@miuegypt.edu.eg][CV]
  • Shehab Mohsen [shehab1603611@miuegypt.edu.eg][CV]
  • Sherif Akram [sherif1402164@miuegypt.edu.eg][CV]
  • Nouran Khaled [nouran1601182@miuegypt.edu.eg][CV]

Supervisors

  • Dr. Ammar Mohamed [Ammar.Ammar@miuegypt.edu.eg]
  • Eng. Haytham Metawie [haytham.metawie@miuegypt.edu.eg]

Project Description

This project aims to help visually impaired people live independently. We propose a system that facilitates indoor navigation, locates items the user might need, and reduces the hazards they might face when living alone. The user points the phone's camera at the surroundings to provide a video stream and issues voice commands to select a feature, for example to find a specific item. The video stream is processed with image-processing techniques to detect objects and estimate their approximate distances. The footage of the surroundings is then used either to give audio directions toward an object, to add a new object to the set of identifiable objects, or to let the user roam freely while being warned of obstacles and hazards. The system also detects abnormal or sudden movements and alerts the user's emergency contact if the user appears to be in danger.
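To make the per-frame pipeline concrete, below is a minimal sketch of the detection-and-ranging step. It is illustrative only: the MobileNet-SSD model files, the class list, the calibrated focal length, and the known object heights are assumptions for the sketch, not the project's actual implementation.

```python
# Illustrative sketch of the per-frame detection and distance-estimation step.
# Assumed (not from the project repo): MobileNet-SSD Caffe files, a calibrated
# focal length in pixels, and known real-world heights for a few objects.
import cv2

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse",
           "motorbike", "person", "pottedplant", "sheep", "sofa", "train",
           "tvmonitor"]
FOCAL_LENGTH_PX = 800.0                           # assumed, from calibration
KNOWN_HEIGHT_M = {"bottle": 0.25, "chair": 0.90}  # assumed real object heights

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

def detect_and_range(frame):
    """Return (label, approx_distance_m) pairs for objects in one frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 scalefactor=0.007843,
                                 size=(300, 300), mean=127.5)
    net.setInput(blob)
    detections = net.forward()          # shape: (1, 1, N, 7)
    results = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence < 0.5:
            continue
        label = CLASSES[int(detections[0, 0, i, 1])]
        box_h_px = (detections[0, 0, i, 6] - detections[0, 0, i, 4]) * h
        if label in KNOWN_HEIGHT_M and box_h_px > 0:
            # Pinhole-camera approximation: Z = f * H_real / h_pixels
            dist = FOCAL_LENGTH_PX * KNOWN_HEIGHT_M[label] / box_h_px
            results.append((label, dist))
    return results
```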

GitHub Repository

1. Proposal

2. Software Requirements Specifications

3. Software Design Description

4. Thesis

Published Paper

  • Conference: 2nd International Conference on Electrical, Communication and Computer Engineering (ICECCE), June 2020, Istanbul, Turkey.

Abstract

Visually impaired people struggle to live without assistance or to face any aspect of life alone, especially those who cannot afford extra assistive equipment. Assistance usually comes either from another person or from wearable devices: the first places a burden on the helper, while the second adds a financial burden without easing the hassle of identifying objects.
Smartphones are accessible to almost everyone and are equipped with accessibility features, including sensors, that can help both visually impaired and sighted people. Thus, this paper proposes an approach that combines a Convolutional Neural Network (CNN), speech recognition, and smartphone camera calibration to facilitate indoor guidance for visually impaired people. The smartphone's camera acts as the user's eyes: a pre-trained CNN model detects objects, and the distance to each object is calculated to guide the user in the right direction and to warn them of obstacles. Speech recognition serves as the communication channel between the visually impaired user and the smartphone. The approach also supports object personalization, which helps distinguish the user's own items from other items found in the room. To evaluate personalized object detection, a customized dataset was created for two objects; the experimental results show accuracies of 92% and 87% for the two objects, respectively. We also compared the detected distances of the two objects against their real distances, obtaining error ratios of 0.05 and 0.08.
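As a rough illustration of the distance evaluation above, the sketch below computes a pinhole-camera distance estimate and the relative error ratio; all numeric values are hypothetical, not measurements from the paper.

```python
# Hypothetical illustration of the error-ratio metric from the abstract:
# error ratio = |estimated distance - real distance| / real distance.
def estimate_distance_m(focal_px, real_height_m, box_height_px):
    """Pinhole-camera approximation: Z = f * H / h."""
    return focal_px * real_height_m / box_height_px

def error_ratio(estimated_m, real_m):
    return abs(estimated_m - real_m) / real_m

# Example with made-up values: f = 800 px, a 0.25 m object spanning 100 px
# in the frame, ground-truth distance 2.10 m.
est = estimate_distance_m(800.0, 0.25, 100.0)   # -> 2.0 m
print(round(error_ratio(est, 2.10), 3))          # -> 0.048
```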

Paper Draft

Demo

User Feedback