Vision Accessibility

Conference Name
CHI 2025
"What Would I Want to Make? Probably Everything": Practices and Speculations of Blind and Low Vision Tactile Graphics Creators
Abstract

Tactile graphics communicate images and spatial information to blind and low vision (BLV) audiences via touch. However, designing and producing tactile graphics is laborious and often inaccessible to BLV people themselves. We interviewed 14 BLV adults with experience both using and creating tactile graphics to understand their current and desired practices. We found that tactile graphics are intensely valued by many, but that access to and fluency with tactile graphics are compounding challenges. To produce tactile graphics, BLV makers constantly navigate tradeoffs between accessible, low-fidelity craft materials and less accessible, high-fidelity equipment. Going forward, we argue that tactile graphics design and production should be made widely accessible and that tactile graphics themselves should be designed to be expressive and ubiquitous. Drawing from these design goals, we propose specific future tools with features for inclusive designing, sharing, and (re)production of tactile graphics.

Authors
Gina Clepper
University of Washington, Seattle, Washington, United States
Emma J. McDonnell
University of Washington, Seattle, Washington, United States
Leah Findlater
University of Washington, Seattle, Washington, United States
Nadya Peek
University of Washington, Seattle, Washington, United States
DOI

10.1145/3706598.3714173

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714173

How Users Who Are Blind or Low Vision Play Mobile Games: Perceptions, Challenges, and Strategies
Abstract

As blind and low-vision (BLV) players engage more deeply with games, accessibility features have become essential. While some research has explored tools and strategies to enhance game accessibility, the specific experiences of these players with mobile games remain underexamined. This study addresses this gap by investigating how BLV users experience mobile games with varying accessibility levels. Through interviews with 32 experienced BLV mobile players, we explore their perceptions, challenges, and strategies for engaging with mobile games. Our findings reveal that BLV players turn to mobile games to alleviate boredom, achieve a sense of accomplishment, and build social connections, but face barriers depending on the game's accessibility level. We also compare mobile games to other forms of gaming, highlighting the relative advantages of mobile games, such as the inherent accessibility of smartphones. This study contributes to understanding BLV mobile gaming experiences and provides insights for enhancing accessible mobile game design.

Authors
Zihe Ran
Communication University of China, Beijing, China
Xiyu Li
Communication University of China, Beijing, China
Qing Xiao
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Xianzhe Fan
Tsinghua University, Beijing, China
Franklin Mingzhe Li
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yanyun Wang
University of Colorado Boulder, Boulder, Colorado, United States
Zhicong Lu
City University of Hong Kong, Hong Kong, China
DOI

10.1145/3706598.3714205

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714205

The Impact of Generative AI Coding Assistants on Developers Who Are Visually Impaired
Abstract

The rapid adoption of generative AI in software development has impacted the industry, yet its effects on developers with visual impairments remain largely unexplored. To address this gap, we used an Activity Theory framework to examine how developers with visual impairments interact with AI coding assistants. For this purpose, we conducted a study where developers who are visually impaired completed a series of programming tasks using a generative AI coding assistant. We uncovered that, while participants found the AI assistant beneficial and reported significant advantages, they also highlighted accessibility challenges. Specifically, the AI coding assistant often exacerbated existing accessibility barriers and introduced new challenges. For example, it overwhelmed users with an excessive number of suggestions, leading developers who are visually impaired to express a desire for "AI timeouts." Additionally, the generative AI coding assistant made it more difficult for developers to switch contexts between the AI-generated content and their own code. Despite these challenges, participants were optimistic about the potential of AI coding assistants to transform the coding experience for developers with visual impairments. Our findings emphasize the need to apply activity-centered design principles to generative AI assistants, ensuring they better align with user behaviors and address specific accessibility needs. This approach can enable the assistants to provide more intuitive, inclusive, and effective experiences, while also contributing to the broader goal of enhancing accessibility in software development.

Authors
Claudia Flores-Saviaga
Northeastern University, Boston, Massachusetts, United States
Benjamin V. Hanrahan
Microsoft, Redmond, Washington, United States
Kashif Imteyaz
Northeastern University, Boston, Massachusetts, United States
Steven Clarke
Microsoft, Edinburgh, United Kingdom
Saiph Savage
Northeastern University, Boston, Massachusetts, United States
DOI

10.1145/3706598.3714008

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714008

The Sky is the Limit: Understanding How Generative AI can Enhance Screen Reader Users' Experience with Productivity Applications
Abstract

Productivity applications including word processors, spreadsheets, and presentation tools are crucial in work, education, and personal settings. Blind users typically access these tools via screen readers (SRs) and face significant accessibility and usability challenges. Recent advancements in Generative AI (GenAI) may address these challenges by enabling natural language interactions and contextual task understanding. However, there is limited understanding of SR users’ needs and attitudes toward GenAI assistance in these applications. We surveyed 99 SR users to gain a holistic understanding of the challenges they face when using productivity applications, the impact of these challenges on their productivity and independence, and their initial perceptions of AI assistance. Driven by their enthusiasm, we conducted interviews with 16 SR users to explore their attitudes toward GenAI and its potential usefulness in productivity applications. Our findings highlight the need for GenAI assistance to support existing SR workflows and the importance of enabling customization and task verification.

Authors
Minoli Perera
Monash University, Clayton, Victoria, Australia
Swamy Ananthanarayan
Monash University, Melbourne, Victoria, Australia
Cagatay Goncu
Monash University, Melbourne, Victoria, Australia
Kim Marriott
Monash University, Melbourne, Victoria, Australia
DOI

10.1145/3706598.3713634

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713634

Light My Way: Developing and Exploring a Multimodal Interface to Assist People With Visual Impairments to Exit Highly Automated Vehicles
Abstract

The introduction of Highly Automated Vehicles (HAVs) has the potential to increase the independence of blind and visually impaired people (BVIPs). However, ensuring safety and situation awareness when exiting these vehicles in unfamiliar environments remains challenging. To address this, we conducted an interactive workshop with N=5 BVIPs to identify their information needs when exiting an HAV and evaluated three previously developed low-fidelity prototypes. The insights from this workshop guided the development of PathFinder, a multimodal interface combining visual, auditory, and tactile modalities tailored to BVIPs' unique needs. In a three-factorial within-between-subject study with N=16 BVIPs, we evaluated PathFinder against an auditory-only baseline in urban and rural scenarios. PathFinder significantly reduced mental demand and maintained high perceived safety in both scenarios, while the auditory baseline led to lower perceived safety in the urban scenario compared to the rural one. Qualitative feedback further supported PathFinder's effectiveness in providing spatial orientation while exiting.

Authors
Luca-Maxim Meinhardt
Institute of Media Informatics, Ulm, Germany
Lina Madlin Weilke
Ulm University, Ulm, Germany
Maryam Elhaidary
Universität Ulm, Ulm, Baden-Württemberg, Germany
Julia von Abel
Universität Ulm, Ulm, Germany
Paul D. S. Fink
The University of Maine, Orono, Maine, United States
Michael Rietzler
Institute of Media Informatics, Ulm, Germany
Mark Colley
Ulm University, Ulm, Germany
Enrico Rukzio
University of Ulm, Ulm, Germany
DOI

10.1145/3706598.3713454

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713454

I-Scratch: Independent Slide Creation With Auditory Comment and Haptic Interface for the Blind and Visually Impaired
Abstract

Presentation software still holds barriers to independent creation for blind and visually impaired users (BVIs) due to its visual-centric interface. To address this gap, we introduce I-Scratch, a multimodal system which empowers BVIs to independently create, explore, and edit PowerPoint slides. We initially designed I-Scratch to tackle the practical challenges faced by BVIs and refined it through iterative participatory sessions involving a blind user to improve its usability and accessibility. I-Scratch integrates a graphical tactile display with auditory guidance for multimodal feedback, simplifies the user interface, and leverages AI technologies for visual assistance in image generation and content interpretation. A user study with ten BVIs demonstrated that I-Scratch enables them to produce visually coherent and aesthetically pleasing slides independently, achieving full or partial success on 91.25% of tasks with a CSI score of 85.07. We present five guidelines and future directions to support the creative work of BVIs using presentation software.

Authors
Gyeongdeok Kim
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Chungman Lim
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Gunhyuk Park
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
DOI

10.1145/3706598.3713553

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713553

ScreenAudit: Detecting Screen Reader Accessibility Errors in Mobile Apps Using Large Language Models
Abstract

Many mobile apps are inaccessible, thereby excluding people from their potential benefits. Existing rule-based accessibility checkers aim to mitigate these failures by identifying errors early during development but are constrained in the types of errors they can detect. We present ScreenAudit, an LLM-powered system designed to traverse mobile app screens, extract metadata and transcripts, and identify screen reader accessibility errors overlooked by existing checkers. We recruited six accessibility experts, including one screen reader user, to evaluate ScreenAudit's reports across 14 unique app screens. Our findings indicate that ScreenAudit achieves an average coverage of 69.2%, compared to only 31.3% with a widely used accessibility checker. Expert feedback indicated that ScreenAudit delivered higher-quality feedback and addressed more aspects of screen reader accessibility than existing checkers, and that ScreenAudit would benefit app developers in real-world settings.
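
The abstract describes ScreenAudit's pipeline only at a high level: traverse screens, extract element metadata, and ask an LLM to flag errors that rule-based checkers miss. The Python below is a minimal sketch of that general idea, not the authors' implementation; the `ScreenElement` fields, `audit_screen`, and the `llm_complete` callable are all hypothetical names introduced for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ScreenElement:
    """Simplified view-hierarchy node; fields are hypothetical for this sketch."""
    role: str        # e.g., "button", "image"
    text: str        # visible text, if any
    label: str       # accessibility label exposed to screen readers
    clickable: bool

def audit_screen(elements: List[ScreenElement],
                 llm_complete: Callable[[str], str]) -> str:
    """Serialize screen metadata into a prompt and ask an LLM to flag likely
    screen reader errors, such as unlabeled interactive elements."""
    listing = "\n".join(
        f"- role={e.role}, text={e.text!r}, label={e.label!r}, clickable={e.clickable}"
        for e in elements
    )
    prompt = (
        "You are an accessibility auditor. For each UI element below, report "
        "problems a screen reader user would encounter, such as unlabeled "
        "interactive elements or uninformative labels.\n" + listing
    )
    return llm_complete(prompt)

if __name__ == "__main__":
    screen = [
        ScreenElement("image_button", "", "", clickable=True),  # unlabeled: should be flagged
        ScreenElement("button", "Submit", "Submit", clickable=True),
    ]
    # Stub standing in for a real model call:
    print(audit_screen(screen, llm_complete=lambda p: "(model output here)"))
```

In this framing, the LLM sees semantic context (roles, labels, clickability) rather than just rule violations, which is what lets an approach like this catch errors beyond a fixed rule set.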

Authors
Mingyuan Zhong
University of Washington, Seattle, Washington, United States
Ruolin Chen
University of Washington, Seattle, Washington, United States
Xia Chen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
James Fogarty
University of Washington, Seattle, Washington, United States
Jacob O. Wobbrock
University of Washington, Seattle, Washington, United States
DOI

10.1145/3706598.3713797

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713797
