Assistive Interactions: Navigation and Visualisation for Users Who are Blind or Low Vision

Conference Name
CHI 2024
Navigating Real-World Challenges: A Quadruped Robot Guiding System for Visually Impaired People in Diverse Environments
Abstract

Blind and Visually Impaired (BVI) people face challenges when navigating unfamiliar environments, even with assistive tools such as white canes or smart devices. Increasingly affordable quadruped robots offer opportunities to design autonomous guides that could improve how BVI people find their way around and maneuver in unfamiliar environments. In this work, we designed RDog, a quadruped robot guiding system that supports BVI individuals' navigation and obstacle avoidance in indoor and outdoor environments. RDog combines an advanced mapping and navigation system with force feedback and preemptive voice feedback to guide users. Using this robot as an evaluation apparatus, we conducted experiments to investigate differences in BVI people's ambulatory behavior when using a white cane, a smart cane, and RDog. Results illustrated the benefits of RDog-based ambulation, including faster and smoother navigation with fewer collisions and limitations, and reduced cognitive load. We discuss the implications of our work for multi-terrain assistive guidance systems.

Award
Honorable Mention
Authors
SHAOJUN CAI
National University of Singapore, Singapore, Singapore
Ashwin Ram
National University of Singapore, Singapore, Singapore
Zhengtai Gou
Tsinghua University, Beijing, China
Mohd Alqama Wasim Shaikh
Veermata Jijabai Technological Institute, Matunga, Dadar, Maharashtra, India
Yu-An Chen
Yale-NUS College, Singapore, Singapore
Yingjia Wan
Institute of Psychology, Chinese Academy of Sciences, Beijing, China
Kotaro Hara
Singapore Management University, Singapore, Singapore
Shengdong Zhao
National University of Singapore, Singapore, Singapore
David Hsu
National University of Singapore, Singapore, Singapore
Paper URL

https://doi.org/10.1145/3613904.3642227

Video
TADA: Making Node-link Diagrams Accessible to Blind and Low-Vision People
Abstract

Diagrams often appear as node-link representations in contexts such as taxonomies, mind maps, and networks in textbooks. Despite their pervasiveness, they present accessibility challenges for blind and low-vision people. To address these challenges, we introduce Touch-and-Audio-based Diagram Access (TADA), a tablet-based interactive system that makes diagram exploration accessible through musical tones and speech. We designed TADA informed by an interview study with 15 participants who shared their challenges and strategies with diagrams. TADA enables people to access a diagram by: i) engaging in open-ended touch-based exploration, ii) searching for nodes, iii) navigating between nodes, and iv) filtering information. We evaluated TADA with 25 participants and found it useful for gaining different perspectives on diagrammatic information.
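The interactions the abstract lists (touch-based exploration, node search, node-to-node navigation) can be pictured with a toy model of a node-link diagram on a touch surface. This is an illustrative sketch only, not TADA's actual implementation; all names and coordinates here are hypothetical.

```python
import math

# Toy node-link diagram: nodes with normalized 2D positions on a tablet
# surface, edges as pairs of node ids.
nodes = {
    "animals": (0.5, 0.2),
    "mammals": (0.3, 0.6),
    "birds": (0.7, 0.6),
}
edges = [("animals", "mammals"), ("animals", "birds")]

def node_under_touch(x, y, radius=0.1):
    """Return the node id within `radius` of the touch point, if any.

    A system like the one described could then announce the node via
    speech or play a tone whose pitch encodes some node property.
    """
    for name, (nx, ny) in nodes.items():
        if math.hypot(nx - x, ny - y) <= radius:
            return name
    return None

def neighbors(node):
    """Nodes reachable in one hop, supporting node-to-node navigation."""
    out = []
    for a, b in edges:
        if a == node:
            out.append(b)
        elif b == node:
            out.append(a)
    return out
```

For example, a touch near (0.3, 0.6) would resolve to "mammals", and navigating from "animals" would offer its two neighbors in turn.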

受賞
Honorable Mention
Authors
Yichun Zhao
University of Victoria, Victoria, British Columbia, Canada
Miguel A. Nacenta
University of Victoria, Victoria, British Columbia, Canada
Mahadeo A. Sukhai
Canadian National Institute for the Blind, Kingston, Ontario, Canada
Sowmya Somanath
University of Victoria, Victoria, British Columbia, Canada
Paper URL

https://doi.org/10.1145/3613904.3642222

Video
Umwelt: Accessible Structured Editing of Multi-Modal Data Representations
Abstract

We present Umwelt, an authoring environment for interactive multimodal data representations. In contrast to prior approaches, which center the visual modality, Umwelt treats visualization, sonification, and textual description as coequal representations: they are all derived from a shared abstract data model, such that no modality is prioritized over the others. To simplify specification, Umwelt evaluates a set of heuristics to generate default multimodal representations that express a dataset's functional relationships. To support smoothly moving between representations, Umwelt maintains a shared query predicate that is reified across all modalities — for instance, navigating the textual description also highlights the visualization and filters the sonification. In a study with 5 blind/low-vision expert users, we found that Umwelt's multimodal representations afforded complementary overview and detailed perspectives on a dataset, allowing participants to fluidly shift between task- and representation-oriented ways of thinking.
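The idea of one shared query predicate being reified in each modality can be sketched with a toy model: a single range predicate is applied once to the data, and each modality renders the same selection in its own terms. This is only an illustration of the general concept from the abstract, not Umwelt's code; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Predicate:
    """A simple range predicate over one field of a tabular dataset."""
    field: str
    low: float
    high: float

    def matches(self, row):
        return self.low <= row[self.field] <= self.high

data = [
    {"year": 2020, "sales": 10},
    {"year": 2021, "sales": 14},
    {"year": 2022, "sales": 9},
]

def apply_to_all(pred, data):
    """Reify one predicate in visual, sonic, and textual terms."""
    selected = [row for row in data if pred.matches(row)]
    return {
        "visual_highlight": [data.index(r) for r in selected],    # mark indices to highlight
        "sonify": [r["sales"] for r in selected],                 # values to play as tones
        "describe": f"{len(selected)} of {len(data)} rows match", # text summary
    }
```

Because every modality reads from the same predicate, updating it in one view (say, by navigating the textual description) keeps the highlighted marks and the sonified values in sync for free.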

Authors
Jonathan Zong
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Isabella Pedraza Pineros
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Mengzhu (Katie) Chen
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Daniel Hajas
University College London, London, United Kingdom
Arvind Satyanarayan
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3613904.3641996

Video
How Do Low-Vision Individuals Experience Information Visualization?
Abstract

In recent years, there has been growing interest in enhancing the accessibility of visualizations for people with visual impairments. While much of this research has focused on improving accessibility for screen reader users, the specific needs of people with remaining vision (i.e., low-vision individuals) have been largely unaddressed. To bridge this gap, we conducted a qualitative study that provides insights into how low-vision individuals experience visualizations. We found that participants used various strategies to examine visualizations with screen magnifiers, and observed that the default zoom level participants use for general purposes may not be optimal for reading visualizations. We also identified that participants relied on prior knowledge and memory to minimize traversal costs when examining visualizations. Based on these findings, we motivate a personalized tool that accommodates the varying visual conditions of low-vision individuals and derive its design goals and features.

Authors
Yanan Wang
University of Wisconsin-Madison, Madison, Wisconsin, United States
Yuhang Zhao
University of Wisconsin-Madison, Madison, Wisconsin, United States
Yea-Seul Kim
University of Wisconsin-Madison, Madison, Wisconsin, United States
Paper URL

https://doi.org/10.1145/3613904.3642188

Video
“Customization is Key”: Reconfigurable Textual Tokens for Accessible Data Visualizations
Abstract

Customization is crucial for making visualizations accessible to blind and low-vision (BLV) people, whose needs vary widely. But what makes customization usable or useful? We identify four design goals for how BLV people should be able to customize screen-reader-accessible visualizations: presence, or what content is included; verbosity, or how concisely content is presented; ordering, or how content is sequenced; and duration, or how long customizations remain active. To meet these goals, we model a customization as a sequence of content tokens, each with a set of adjustable properties. We instantiate our model by extending Olli, an open-source accessible visualization toolkit, with a settings menu and a command box for persistent and ephemeral customization, respectively. Through a study with 13 BLV participants, we find that customization increases the ease of identifying and remembering information. However, customization also introduces additional complexity, making it more helpful for users already familiar with similar tools.
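The token model the abstract describes — a description as a sequence of content tokens, each with adjustable properties covering presence, verbosity, and ordering — can be pictured with a small sketch. This is an illustrative toy, not Olli's actual API; all names and example tokens are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Token:
    """One unit of screen-reader content with adjustable properties."""
    kind: str             # e.g. "axis", "mark", "summary"
    text: str             # full-verbosity rendering
    short: str            # concise rendering
    present: bool = True  # presence: include this token at all?

def render(tokens, verbose=True, order=None):
    """Render tokens to one spoken string, honoring presence, verbosity, ordering."""
    if order is not None:
        tokens = sorted(tokens, key=lambda t: order.index(t.kind))
    parts = [(t.text if verbose else t.short) for t in tokens if t.present]
    return "; ".join(parts)

tokens = [
    Token("axis", "x-axis shows year from 2020 to 2022", "x: year"),
    Token("summary", "sales rise then fall", "rise/fall"),
]
```

In this framing, a persistent setting (from a settings menu) would change the defaults passed to `render`, while an ephemeral command (from a command box) would override them for a single reading.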

Authors
Shuli Jones
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Isabella Pedraza Pineros
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Daniel Hajas
University College London, London, United Kingdom
Jonathan Zong
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Arvind Satyanarayan
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3613904.3641970

Video