Take My Hand: Automated Hand-Based Spatial Guidance for the Visually Impaired

Abstract

Tasks that involve locating objects and then moving the hands to those specific locations, such as using touchscreens or grabbing objects on a desk, are challenging for the visually impaired. Over the years, audio guidance and haptic feedback have been staples of hand-navigation-based assistive technologies. However, these methods require the user to interpret the generated directional cues and then manually perform the hand motions. In this paper, we present automated hand-based spatial guidance to bridge the gap between guidance and execution, allowing visually impaired users to move their hands between two points automatically, without any manual effort. We implement this concept through FingerRover, an on-finger miniature robot that carries the user's finger to target points. We demonstrate potential applications that can benefit from automated hand-based spatial guidance. Our user study shows the potential of our technique to improve the interaction capabilities of people with visual impairments.

Award
Best Paper
Authors
Adil Rahman
University of Virginia, Charlottesville, Virginia, United States
Md Aashikur Rahman Azim
University of Virginia, Charlottesville, Virginia, United States
Seongkook Heo
University of Virginia, Charlottesville, Virginia, United States
Paper URL

https://doi.org/10.1145/3544548.3581415

Video

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Navigating Spaces and Places

Hall F
6 presentations
2023-04-26 01:35:00
2023-04-26 03:00:00