Blind and Low-Vision Interaction

Conference Name
CHI 2026
Bridging the Gap between Automated Intervention and Actual User Experience: A Mixed-Methods Study on Mobile Accessibility Issues for Screen Reader Users
Abstract

Millions of people around the world experience blindness or moderate to severe visual disability and must rely on screen readers to perceive the content of phone screens. Guidelines and testing tools developed to aid software developers suffer from inconsistent categorization of accessibility issues and fail to faithfully represent real user experience. In this paper, we aim to construct a better classification of accessibility issues by integrating feedback from screen reader users into existing computational methods. First, we conduct a systematic literature review, investigating 31 papers that demonstrated automated interventions for mobile accessibility. We then juxtapose the issues these papers address computationally with real user experience by observing blind users' interaction on 4 apps across 20 user studies. Synthesizing the two studies, we construct a categorization and guideline for screen reader accessibility issues on mobile, aimed at fostering a more user-aware understanding and informing subsequent interventions toward accessible mobile app development.

Authors
Syed Fatiul Huq
University of California, Irvine, Irvine, California, United States
Ziyao He
University of California, Irvine, Irvine, California, United States
Yirui He
University of California, Irvine, Irvine, California, United States
Sam Malek
University of California, Irvine, Irvine, California, United States
Towards LLM-powered Assistive Drone for Blind and Low Vision Users
Abstract

Drones have gained traction as a versatile form of assistive robot for Blind and Low Vision (BLV) people. Nonetheless, novel interaction techniques are required to enable BLV people to communicate with drones naturally. In this work, we built an LLM-powered assistive drone for BLV users. We leverage an LLM to translate high-level user goals into step-by-step instructions for the drone and to extract visual information from the images. Through a formative study with BLV users (N=9), we identified envisioned use cases and desired interaction modalities. We then took a participatory and iterative approach to build a prototype, incorporating feedback from 3 BLV users as well as 5 domain experts. Finally, we conducted a user study with an additional 6 BLV participants to evaluate the iterated prototype and received positive feedback. This work contributes to a growing body of research on harnessing the power of LLMs to build a more inclusive world.

Authors
Yize Wei
National University of Singapore, Singapore, Singapore
Ibnu Taimiyyah Bin Adam
National University of Singapore, Singapore, Singapore
Hanjun Wu
School of Computing, National University of Singapore, Singapore, Singapore
Moritz Alexander Messerschmidt
Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
Wei Tsang Ooi
National University of Singapore, Singapore, Singapore
Christophe Jouffrais
CNRS, Singapore, Singapore
Suranga Nanayakkara
School of Computing, National University of Singapore, Singapore, Singapore
TingleTouch: Touch Guidance through Electrical Stimulation in Resistance Training
Abstract

In resistance training, trainers employ touch guidance to help trainees control posture and activate muscles. Haptic feedback can extend this support to solitary workouts, but translating the nuances of touch into effective haptic patterns remains challenging. In this paper, we categorize the instructional messages conveyed through trainers' touch guidance and design electrical stimulation patterns to replicate them. A preliminary study with six trainers and six trainees identified six core messages underlying touch guidance. We then designed electrical stimulation patterns for each message and refined them with two sports scientists and a UX designer, ensuring usability and grounding. Finally, sixteen gymgoers evaluated these patterns in a controlled exercise task. Participants reliably distinguished the feedback and used the instructed muscles accordingly, achieving accuracies of 97.14% and 99.22% across two sessions, cross-checked with EMG and pose estimation. These findings demonstrate that the proposed electrical stimulation feedback is intuitive and learnable.

Authors
Dong-Uk Kim
Chung-Ang University, Seoul, Korea, Republic of
Hye-Young Jo
University of Colorado Boulder, Boulder, Colorado, United States
Hankyung Kim
KAIST, Daejeon, Korea, Republic of
Ryo Suzuki
University of Colorado Boulder, Boulder, Colorado, United States
Seungwoo Je
Southern University of Science and Technology, Shenzhen, China
Yoonji Kim
Chung-Ang University, Anseong, Korea, Republic of
Video
Are You Comfortable Sharing It?: Leveraging Image Obfuscation Techniques to Enhance Sharing Privacy for Blind and Visually Impaired Users
Abstract

Blind and Visually Impaired (BVI) people face unique challenges when sharing images, as these may accidentally contain sensitive or inappropriate content. In many instances, they are unaware of the potential risks associated with sharing such content, which can compromise their privacy and interpersonal relationships. To address this issue, we investigated image filtering techniques that could help BVI users manage sensitive content before sharing it with various audiences, including family, friends, or strangers. We conducted a study with 20 BVI participants, evaluating different filters applied to images of varying sensitivity, such as personal moments or embarrassing shots. Results indicated that pixelation was the least preferred method, while preferences for other filters varied depending on image type and sharing context. Additionally, participants reported greater comfort when sharing filtered rather than unfiltered images across audiences. Based on these results, we offer a set of design guidelines to enhance the image-sharing experience for BVI individuals.

Authors
Satabdi Das
The University of British Columbia, Kelowna, British Columbia, Canada
Nahian Beente Firuj
Shahjalal University of Science and Technology, Sylhet, Bangladesh
Manjot Singh
The University of British Columbia, Kelowna, British Columbia, Canada
Arshad Nasser
University of British Columbia, Kelowna, British Columbia, Canada
Khalad Hasan
University of British Columbia, Kelowna, British Columbia, Canada
A11y-CUA Dataset: Characterizing the Accessibility Gap in Computer Use Agents
Abstract

Computer Use Agents (CUAs) operate interfaces by pointing, clicking, and typing, mirroring the interactions of sighted users (SUs), who can thus monitor CUAs and share control. CUAs do not reflect the interactions of blind and low-vision users (BLVUs), who use assistive technology (AT); BLVUs thus cannot easily collaborate with CUAs. To characterize the accessibility gap of CUAs, we present A11y-CUA, a dataset of BLVUs and SUs performing 60 everyday tasks, comprising 40.4 hours and 158,325 events. Analysis of the collected interaction traces quantitatively confirms distinct interaction styles between the SU and BLVU groups (mouse- vs. keyboard-dominant) and demonstrates interaction diversity within each group (sequential vs. shortcut navigation for BLVUs). We then compare the collected traces to state-of-the-art CUAs under default and AT conditions (keyboard-only, magnifier). The default CUA executed 78.3% of tasks successfully, but under the AT conditions its performance dropped to 41.67% (keyboard-only) and 28.3% (magnifier), and it did not reflect the nuances of real AT use. With our open A11y-CUA dataset, we aim to promote collaborative and accessible CUAs for everyone.

Authors
Ananya Gubbi Mohanbabu
The University of Texas at Austin, Austin, Texas, United States
Rosiana Natalie
University of Michigan, Ann Arbor, Michigan, United States
Brandon Kim
University of Michigan, Ann Arbor, Michigan, United States
Anhong Guo
University of Michigan, Ann Arbor, Michigan, United States
Amy Pavel
University of California, Berkeley, Berkeley, California, United States
Video
Bridging Visual Asymmetry: Exploring AI-Mediated Communication Support for Parents with Visual Impairments and Their Sighted Children in Outdoor Informal Learning
Abstract

Parents with visual impairments (PVI) encounter unique challenges in engaging their sighted children in outdoor informal learning, yet little is known about how to address these barriers. We first conducted interviews with 11 mixed-ability families to uncover the perceptual, knowledge, and interactional challenges that limit parental participation. Building on these findings, we explored design opportunities across device form factors, interaction mechanisms, and modes of AI mediation through a design consultation, which led to the development of Bond, a distributed Wizard-of-Oz prototype. Bond combines a lightweight child-worn camera with parent-facing prompts to deliver timely, context-aware support for joint attention and conversation. A field study with 14 families demonstrated increased parental responsiveness, deepened parent–child dialogue, and strengthened parents’ confidence, while fostering children’s curiosity and co-exploration. We propose a Symbiotic Learning Paradigm that reframes AI as a relational mediator bridging perceptual asymmetry, offering design considerations for inclusive, co-constructed learning in mixed-ability families.

Authors
Yutong Jiang
Tongji University, Shanghai, China
Zixuan Zhang
Tongji University, Shanghai, China
Jiaying Xu
Tongji University, Shanghai, China
Qingyun Zheng
Tongji University, Shanghai, China
Qian Guo
Tongji University, Shanghai, China
Qinyang Wang
Tongji University, Shanghai, China
Qi Wang
Tongji University, Shanghai, China
Guanhong Liu
Tongji University, Shanghai, China
Idea11y: Enhancing Accessibility in Collaborative Ideation for Blind or Low Vision Screen Reader Users
Abstract

Collaborative ideation tools like digital whiteboards are widely used by designers, academics, and creative practitioners; yet most ideation tools are inaccessible to blind or low vision (BLV) users. Informed by prior work on whiteboarding challenges encountered by BLV users and our formative study with eight sighted whiteboard users, we built Idea11y, a whiteboard plug-in that provides a hierarchical, editable text outline of board content, augmented with audio cues and voice coding. Findings from an evaluation with thirteen BLV screen reader users revealed how Idea11y supported BLV users' understanding of clustering structure and streamlined their process to author, synthesize, and prioritize ideas on the board. Collaborative ideation sessions with six BLV-sighted dyads demonstrated how BLV users used Idea11y to develop collaboration awareness and coordinate actions with sighted collaborators. Drawing on this, we discuss ways to move beyond implicit visual norms in established ideation frameworks and practical considerations for future accessible ideation systems.

Authors
Mingyi Li
Northeastern University, Boston, Massachusetts, United States
Huiru Yang
Northeastern University, Boston, Massachusetts, United States
Nihar Sanda
Northeastern University, Boston, Massachusetts, United States
Maitraye Das
Northeastern University, Boston, Massachusetts, United States
Video