This study session has ended. Thank you for participating.
Blind and Visually Impaired (BVI) people face challenges when navigating unfamiliar environments, even with assistive tools such as white canes or smart devices. Increasingly affordable quadruped robots offer opportunities to design autonomous guides that could improve how BVI people find their way around and move through unfamiliar environments. In this work, we designed RDog, a quadruped robot guiding system that supports BVI individuals' navigation and obstacle avoidance in indoor and outdoor environments. RDog combines an advanced mapping and navigation system with force feedback and preemptive voice feedback to guide users. Using this robot as an evaluation apparatus, we conducted experiments to investigate differences in BVI people's ambulatory behavior when using a white cane, a smart cane, and RDog. Results illustrate the benefits of RDog-based ambulation, including faster and smoother navigation with fewer collisions and limitations, and reduced cognitive load. We discuss the implications of our work for multi-terrain assistive guidance systems.
Diagrams often appear as node-link representations in contexts such as taxonomies, mind maps and networks in textbooks. Despite their pervasiveness, they present accessibility challenges for blind and low-vision people. To address this challenge, we introduce Touch-and-Audio-based Diagram Access (TADA), a tablet-based interactive system that makes diagram exploration accessible through musical tones and speech. We designed TADA informed by an interview study with 15 participants who shared their challenges and strategies with diagrams. TADA enables people to access a diagram by: i) engaging in open-ended touch-based explorations, ii) searching for nodes, iii) navigating between nodes and iv) filtering information. We evaluated TADA with 25 participants and found it useful for gaining different perspectives on diagrammatic information.
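The four access operations above map naturally onto a small node-link data model. The sketch below is only an illustration under that reading, not TADA's implementation; all names (DiagramNode, searchNodes, neighbors, filterByCategory) are hypothetical.

```typescript
// Hypothetical node-link model with the kinds of operations the abstract
// lists: searching for nodes, navigating between nodes, and filtering.

interface DiagramNode {
  id: string;
  label: string;
  category?: string; // e.g. a taxonomy level, usable for filtering
}

interface Diagram {
  nodes: DiagramNode[];
  edges: [string, string][]; // pairs of node ids
}

// Search: find nodes whose label matches a query.
function searchNodes(d: Diagram, query: string): DiagramNode[] {
  const q = query.toLowerCase();
  return d.nodes.filter((n) => n.label.toLowerCase().includes(q));
}

// Navigate: list the neighbors of a node, i.e. where the user can move next.
function neighbors(d: Diagram, id: string): DiagramNode[] {
  const linked = new Set(
    d.edges.flatMap(([a, b]) => (a === id ? [b] : b === id ? [a] : []))
  );
  return d.nodes.filter((n) => linked.has(n.id));
}

// Filter: keep only nodes in a given category and the edges among them.
function filterByCategory(d: Diagram, category: string): Diagram {
  const keep = new Set(
    d.nodes.filter((n) => n.category === category).map((n) => n.id)
  );
  return {
    nodes: d.nodes.filter((n) => keep.has(n.id)),
    edges: d.edges.filter(([a, b]) => keep.has(a) && keep.has(b)),
  };
}

// Tiny example diagram.
const taxonomy: Diagram = {
  nodes: [
    { id: "1", label: "Mammal", category: "class" },
    { id: "2", label: "Dog", category: "species" },
    { id: "3", label: "Cat", category: "species" },
  ],
  edges: [["1", "2"], ["1", "3"]],
};

searchNodes(taxonomy, "dog");          // -> the "Dog" node
neighbors(taxonomy, "1");              // -> Dog and Cat
filterByCategory(taxonomy, "species"); // -> subgraph of species nodes only
```

In TADA itself these operations are surfaced through touch gestures, musical tones, and speech rather than function calls.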
We present Umwelt, an authoring environment for interactive multimodal data representations. In contrast to prior approaches, which center the visual modality, Umwelt treats visualization, sonification, and textual description as coequal representations: they are all derived from a shared abstract data model, such that no modality is prioritized over the others. To simplify specification, Umwelt evaluates a set of heuristics to generate default multimodal representations that express a dataset's functional relationships. To support smoothly moving between representations, Umwelt maintains a shared query predicate that is reified across all modalities; for instance, navigating the textual description also highlights the visualization and filters the sonification. In a study with 5 blind / low-vision expert users, we found that Umwelt's multimodal representations afforded complementary overview and detailed perspectives on a dataset, allowing participants to fluidly shift between task- and representation-oriented ways of thinking.
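To make the shared-query-predicate idea concrete, here is a minimal TypeScript sketch. It is not Umwelt's actual API; SharedState, Modality, and the other names are hypothetical. The point is only that every modality renders from the same filtered slice of one abstract data model, so updating the predicate from any modality updates the others.

```typescript
// A minimal sketch of a selection predicate reified across modalities.

type Datum = Record<string, number | string>;
type Predicate = (d: Datum) => boolean;

interface Modality {
  name: string;
  render(selected: Datum[]): void;
}

class SharedState {
  private predicate: Predicate = () => true;

  constructor(private data: Datum[], private modalities: Modality[]) {}

  // Reify a new predicate across all modalities: each receives the same
  // filtered data, so one could highlight it, another sonify only those
  // points, and a third narrow its textual description.
  setPredicate(p: Predicate): void {
    this.predicate = p;
    const selected = this.data.filter(this.predicate);
    this.modalities.forEach((m) => m.render(selected));
  }
}

// Stand-in modalities that just log what they would present.
const log = (name: string): Modality => ({
  name,
  render: (sel) => console.log(`${name}: ${sel.length} data points selected`),
});

const state = new SharedState(
  [{ year: 2019, value: 3 }, { year: 2020, value: 5 }, { year: 2020, value: 8 }],
  [log("visualization"), log("sonification"), log("description")]
);

// Navigating to "year = 2020" in one modality filters/highlights all of them.
state.setPredicate((d) => d["year"] === 2020);
```

The design point this mirrors is that selection state lives with the data model rather than with any single view, which is what lets navigation in the text description drive highlighting and filtering elsewhere.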
In recent years, there has been growing interest in enhancing the accessibility of visualizations for people with visual impairments. While much of the research has focused on improving accessibility for screen reader users, the specific needs of people with remaining vision (i.e., low-vision individuals) have been largely unaddressed. To bridge this gap, we conducted a qualitative study that provides insights into how low-vision individuals experience visualizations. We found that participants used various strategies to examine visualizations with screen magnifiers, and we observed that the default zoom level participants use for general purposes may not be optimal for reading visualizations. We also found that participants relied on prior knowledge and memory to minimize traversal cost when examining a visualization. Based on these findings, we motivate a personalized tool that accommodates the varying visual conditions of low-vision individuals and derive the design goals and features of such a tool.
Customization is crucial for making visualizations accessible to blind and low-vision (BLV) people with widely varying needs. But what makes for usable or useful customization? We identify four design goals for how BLV people should be able to customize screen-reader-accessible visualizations: presence, or what content is included; verbosity, or how concisely content is presented; ordering, or how content is sequenced; and duration, or how long customizations remain active. To meet these goals, we model a customization as a sequence of content tokens, each with a set of adjustable properties. We instantiate our model by extending Olli, an open-source accessible visualization toolkit, with a settings menu and a command box for persistent and ephemeral customization, respectively. Through a study with 13 BLV participants, we find that customization increases the ease of identifying and remembering information. However, customization also introduces additional complexity, making it more helpful for users who are already familiar with similar tools.
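A minimal sketch of that token model follows, with hypothetical names (ContentToken, Customization, describe) rather than Olli's real API, to show how presence, verbosity, ordering, and duration can act on a sequence of content tokens.

```typescript
type Verbosity = "terse" | "verbose";
type Duration = "persistent" | "ephemeral";

// A unit of screen-reader content with adjustable properties.
interface ContentToken {
  kind: string;          // e.g. "axis", "encoding", "dataValue"
  present: boolean;      // presence: is this content included at all?
  verbosity: Verbosity;  // verbosity: how concisely it is rendered
  order: number;         // ordering: position in the spoken sequence
  render(verbosity: Verbosity): string;
}

// A customization adjusts token properties and has a duration:
// persistent (settings menu) or ephemeral (command box).
interface Customization {
  duration: Duration;
  overrides: Record<string, Partial<Pick<ContentToken, "present" | "verbosity" | "order">>>;
}

// Apply a customization to a token sequence and produce the announced text.
function describe(tokens: ContentToken[], custom: Customization): string {
  return tokens
    .map((t) => ({ ...t, ...(custom.overrides[t.kind] ?? {}) })) // apply overrides
    .filter((t) => t.present)                                    // presence
    .sort((a, b) => a.order - b.order)                           // ordering
    .map((t) => t.render(t.verbosity))                           // verbosity
    .join(". ");
}

// Example: an ephemeral customization that hides the axis summary and asks
// for terse data values.
const axis: ContentToken = {
  kind: "axis", present: true, verbosity: "verbose", order: 0,
  render: (v) => (v === "terse" ? "x: year" : "The x axis encodes year from 2010 to 2020"),
};
const value: ContentToken = {
  kind: "dataValue", present: true, verbosity: "verbose", order: 1,
  render: (v) => (v === "terse" ? "42" : "The value at this point is 42"),
};
const quick: Customization = {
  duration: "ephemeral",
  overrides: { axis: { present: false }, dataValue: { verbosity: "terse" } },
};

console.log(describe([axis, value], quick)); // "42"
```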