Enhancing perception for the visually impaired with deep learning techniques and low-cost wearable sensors

Abstract

  • AUTHORS

    • Bauer, Z.,
    • Dominguez, A.,
    • Gomez-Donoso, F.,
    • Orts-Escolano, S.,
    • Cazorla, M. 


    ABSTRACT
    As estimated by the World Health Organization, there are millions of people who lives with some form of vision impairment. As a consequence, some of them present mobility problems in outdoor environments. With the aim of helping them, we propose in this work a system which is capable of delivering the position of potential obstacles in outdoor scenarios. Our approach is based on non-intrusive wearable devices and focuses also on being low-cost. First, a depth map of the scene is estimated from a color image, which provides 3D information of the environment. Then, an urban object detector is in charge of detecting the semantics of the objects in the scene. Finally, the three-dimensional and semantic data is summarized in a simpler representation of the potential obstacles the users have in front of them. This information is transmitted to the user through spoken or haptic feedback. Our system is able to run at about 3.8 fps and achieved a 87.99% mean accuracy in obstacle presence detection. Finally, we deployed our system in a pilot test which involved an actual person with vision impairment, who validated the effectiveness of our proposal for improving its navigation capabilities in outdoors. © 2019 Elsevier B.V.

Publication date

  • 2020

Keywords

    • Deep learning
    • Depth from monocular frames
    • Outdoors
    • Visual impaired assistant

First page

  • 27

Last page

  • 36

Volume

  • 137