3D printed models (3DPMs) are increasingly used to support the education of students who are blind or have low vision (BLV). As 3DPMs become more widely adopted, educators are turning to more complex multi-part models. However, this increased complexity brings additional challenges, such as providing audio labels for multiple parts and guiding the assembly and disassembly of the model. This work explores the co-design and evaluation of a system that supports the use of multi-part 3DPMs by BLV students. Working with BLV adults and children, as well as educators, we developed an iPad application that supports interaction with an insect model, including speech interaction and support for assembly. Evaluation showed that students greatly enjoyed the system, while educators were enthusiastic because they believed it would increase classroom engagement and inclusion and that its support for voice annotation could be used for assessment.
While wearable haptics hold promise for making non-verbal cues like gestures and facial expressions accessible to blind or low-vision musicians, our understanding of how vibration signals can be interpreted and applied in real-world learning environments remains limited. We invited five music teachers and their seven students to participate in a ten-week longitudinal study involving observations, weekly catch-ups, group discussions, and interviews, exploring how wearable haptics could facilitate communication between sighted teachers and BLV students during one-on-one music lessons. We found that students and teachers derived particular meanings from vibration signals, including time-coded meanings, mutually agreed and intuitive meanings, and haptic metaphors. Additionally, wearable haptics significantly improved the experience of learning music for both sighted teachers and BLV students. We conclude by highlighting key design implications and outlining future research directions for wearable haptics that improve the music learning experience of BLV people.
Assistive technologies (ATs) have the potential to empower blind and low vision (BLV) people, yet they often remain underutilised due to their immobility and limited applicability across scenarios. This paper presents LifeInsight, an AI-powered assistive wearable for BLV people that uses a wearable camera, microphone, and single-click interface for goal-oriented visual querying. To inform the design of LifeInsight, we first collected a corpus of BLV people's daily experiences through video probes and interviews: ten BLV people recorded their daily experiences over one week using GoPro cameras, providing empirical insights. Based on these insights, we report on LifeInsight and its evaluation with 13 BLV people across six scenarios. LifeInsight responded effectively to visual queries, such as distinguishing between jars or identifying the status of a candle. Drawing on our work, we conclude with key lessons and practical recommendations to guide future research and advance the development and evaluation of AI-powered assistive wearables.
By overlaying time-synced user comments on videos, Danmu creates a co-watching experience for online viewers. However, its visual-centric design poses significant challenges for blind and low vision (BLV) viewers. Our formative study identified three primary challenges that hinder BLV viewers' engagement with Danmu: the lack of visual context, the speech interference between comments and videos, and the disorganization of comments. To address these challenges, we present DanmuA11y, a system that makes Danmu accessible by transforming it into multi-viewer audio discussions. DanmuA11y incorporates three core features: (1) augmenting Danmu with visual context, (2) seamlessly integrating Danmu into videos, and (3) presenting Danmu via multi-viewer discussions. Evaluation with twelve BLV viewers demonstrated that DanmuA11y significantly improved Danmu comprehension, provided smooth viewing experiences, and fostered social connections among viewers. We further highlight implications for enhancing commentary accessibility in video-based social media and live-streaming platforms.
Landmarks are critical in navigation, supporting self-orientation and the development of mental models. Like sighted people, people with low vision (PLV) frequently look for landmarks via visual cues, but vision loss makes some important landmarks difficult for them to identify. We first conducted a formative study with six PLV to characterize their challenges and strategies in landmark selection, identifying their unique landmark categories (e.g., area silhouettes, accessibility-related objects) and preferred landmark augmentations. We then designed VisiMark, an AR interface that supports landmark perception for PLV by providing both overviews of space structures and in-situ landmark augmentations. We evaluated VisiMark with 16 PLV and found that it enabled them to perceive landmarks they preferred but could not easily perceive before, and shifted their landmark selection from only visually salient objects to cognitive landmarks that are more important and meaningful. We further derive design considerations for AR-based landmark augmentation systems for PLV.
The widespread use of image tables, that is, tables presented as images rather than as structured text, creates significant accessibility challenges for blind and low vision (BLV) people, limiting their access to critical data. Despite advancements in artificial intelligence (AI) for interpreting image tables, current solutions often fail to consider the specific needs of BLV users, leading to a poor user experience. To address these issues, we introduce TableNarrator, a system designed to enhance the accessibility of image tables. Informed by accessibility standards and user feedback, TableNarrator leverages AI to generate alternative text tailored to the cognitive and reading preferences of BLV users. It streamlines access through a simple interaction mode and offers personalized options. Our evaluations, from both technical and user perspectives, demonstrate that TableNarrator not only provides accurate and comprehensive table information but also significantly enhances the user experience for BLV people.