Tasks that involve locating objects and then moving the hands to those specific locations, such as using touchscreens or grabbing objects on a desk, are challenging for people with visual impairments. Over the years, audio guidance and haptic feedback have been staples of hand-navigation assistive technologies. However, these methods require the user to interpret the generated directional cues and then manually perform the hand motions. In this paper, we present automated hand-based spatial guidance to bridge the gap between guidance and execution, allowing visually impaired users to move their hands between two points automatically, without manual effort. We implement this concept through FingerRover, an on-finger miniature robot that carries the user's finger to target points. We demonstrate potential applications that can benefit from automated hand-based spatial guidance. Our user study shows the potential of our technique to improve the interaction capabilities of people with visual impairments.
Visual icons provide immediate recognition of features on print maps but do not translate well to touch reading by people who are blind or have low vision, due to the low fidelity of tactile perception. We explored 3D printed icons as an equivalent to visual icons for tactile maps, addressing these problems. We designed over 200 tactile icons (TactIcons) for street and park maps. These were touch tested by blind and sighted people, resulting in a corpus of 33 icons that can be recognised instantly and a further 34 icons that are easily learned. Importantly, this work has informed the creation of detailed guidelines for the design of TactIcons and a practical methodology for touch testing new TactIcons.
It is hoped that this work will contribute to the creation of more inclusive, user-friendly tactile maps for people who are blind or have low vision.
Enabling blind visitors to explore museum floors while experiencing the facility's atmosphere, and increasing their autonomy and enjoyment, are imperative for giving them a high-quality museum experience. We designed a science museum exploration system for blind visitors using an autonomous navigation robot. Blind users can control the robot to navigate them toward desired exhibits while it plays short audio descriptions along the route. They can also browse detailed explanations on their smartphones and call museum staff if interactive support is needed. Our real-world user study at a science museum during its opening hours revealed that blind participants could explore the museum safely and independently at their own pace. The study also showed that sighted visitors who saw the participants walking with the robot accepted the assistive robot well. We finally conducted focus group sessions with the blind participants and discussed further requirements toward a more independent museum experience.
Understanding multi-level spatial topologies is a frequent and difficult challenge in the daily lives of people with visual impairments, impacting their independent mobility. Using the tools of the “maker” movement, and following an iterative co-design process with Orientation and Mobility instructors, we created an innovative tool for teaching complex spatial knowledge: a 3D printed interactive model of a train station. We then conducted a comparative study with end users, comparing the 3D interactive model we designed with two 2D interactive tactile maps representing the same location. Our results show that the 3D interactive model is useful and usable, provides greater satisfaction, and is preferred to the 2D tactile maps.
In addition, complex spatial notions are better understood with the 3D model. Altogether, these results suggest that the “maker movement” may empower special education teachers with adapted and innovative tools.
Independent travel and navigation in new environments, particularly multi-storey buildings, are a major challenge for people who are blind or have low vision (BLV).
Using tactile maps as part of orientation and mobility (O&M) training, BLV people can build a cognitive map of an environment before visiting. Tactile maps of multi-level environments, however, have received little attention.
We investigated the usefulness of 3D printed models of buildings, through a user study with nine BLV adults. Three designs were evaluated: flat, overlapped-sliding and overlapped-rotating.
All three designs were reported to be useful, usable and engaging, and all allowed participants to build a cognitive map of the building. There was a strong user preference for the overlapped presentations, which were reported to be more effective in supporting cross-floor spatial knowledge. This exploration of the design space of 3D building plans demonstrates their value and, we hope, will encourage their provision in O&M training.
Guiding robots, in the form of canes or cars, have recently been explored to assist blind and low vision (BLV) people. Such robots can provide full or partial autonomy when guiding; however, the pros and cons of different forms and autonomy levels for guiding robots remain unknown. We sought to fill this gap. We designed an autonomy-switchable guiding robotic cane and a robotic car, and conducted a controlled lab study (N=12) and a field study (N=9) with BLV participants. Results showed that full autonomy yielded better walking performance and subjective ratings in the controlled study, whereas participants used partial autonomy more in the natural environment, as they demanded more control. In addition, the car robot provided a higher sense of safety and greater navigation efficiency than the cane robot. Our findings offer empirical evidence about how the BLV community perceives different machine forms and autonomy levels, which can inform the design of assistive robots.