Tactile graphics communicate images and spatial information to blind and low vision (BLV) audiences via touch. However, designing and producing tactile graphics is laborious and often inaccessible to BLV people themselves. We interviewed 14 BLV adults with experience both using and creating tactile graphics to understand their current and desired practices. We found that tactile graphics are intensely valued by many, but that access to and fluency with tactile graphics are compounding challenges. To produce tactile graphics, BLV makers constantly navigate tradeoffs between accessible, low-fidelity craft materials and less accessible, high-fidelity equipment. Going forward, we argue that tactile graphics design and production should be made widely accessible and that tactile graphics themselves should be designed to be expressive and ubiquitous. Drawing from these design goals, we propose specific future tools with features for inclusive designing, sharing, and (re)production of tactile graphics.
As blind and low-vision (BLV) players engage more deeply with games, accessibility features have become essential. While some research has explored tools and strategies to enhance game accessibility, the specific experiences of these players with mobile games remain underexamined. This study addresses this gap by investigating how BLV users experience mobile games with varying levels of accessibility. Through interviews with 32 experienced BLV mobile game players, we explore their perceptions, challenges, and strategies for engaging with mobile games. Our findings reveal that BLV players turn to mobile games to alleviate boredom, achieve a sense of accomplishment, and build social connections, but face barriers that vary with a game's accessibility level. We also compare mobile games to other forms of gaming, highlighting the relative advantages of mobile games, such as the inherent accessibility of smartphones. This study contributes to the understanding of BLV mobile gaming experiences and provides insights for enhancing accessible mobile game design.
The rapid adoption of generative AI in software development has impacted the industry, yet its effects on developers with visual impairments remain largely unexplored. To address this gap, we used an Activity Theory framework to examine how developers with visual impairments interact with AI coding assistants, conducting a study in which these developers completed a series of programming tasks using a generative AI coding assistant. We found that, while participants considered the AI assistant beneficial and reported significant advantages, they also highlighted accessibility challenges: the assistant often exacerbated existing accessibility barriers and introduced new ones. For example, it overwhelmed users with an excessive number of suggestions, leading participants to express a desire for "AI timeouts." Additionally, the assistant made it more difficult for developers to switch contexts between AI-generated content and their own code. Despite these challenges, participants were optimistic about the potential of AI coding assistants to transform the coding experience for developers with visual impairments. Our findings emphasize the need to apply activity-centered design principles to generative AI assistants, ensuring they better align with user behaviors and address specific accessibility needs. This approach can enable assistants to provide more intuitive, inclusive, and effective experiences while contributing to the broader goal of enhancing accessibility in software development.
Productivity applications, including word processors, spreadsheets, and presentation tools, are crucial in work, education, and personal settings. Blind users typically access these tools via screen readers (SRs) and face significant accessibility and usability challenges. Recent advances in Generative AI (GenAI) may address these challenges by enabling natural language interaction and contextual task understanding. However, there is limited understanding of SR users' needs and attitudes toward GenAI assistance in these applications. We surveyed 99 SR users to gain a holistic understanding of the challenges they face when using productivity applications, the impact of these challenges on their productivity and independence, and their initial perceptions of AI assistance. Motivated by respondents' enthusiasm, we then interviewed 16 SR users to explore their attitudes toward GenAI and its potential usefulness in productivity applications. Our findings highlight the need for GenAI assistance to support existing SR workflows and the importance of enabling customization and task verification.
The introduction of Highly Automated Vehicles (HAVs) has the potential to increase the independence of blind and visually impaired people (BVIPs). However, ensuring safety and situation awareness when exiting these vehicles in unfamiliar environments remains challenging. To address this, we conducted an interactive workshop with N=5 BVIPs to identify their information needs when exiting an HAV and evaluated three previously developed low-fidelity prototypes. The insights from this workshop guided the development of PathFinder, a multimodal interface combining visual, auditory, and tactile modalities tailored to BVIPs' unique needs. In a three-factor mixed (within- and between-subjects) study with N=16 BVIPs, we evaluated PathFinder against an auditory-only baseline in urban and rural scenarios. PathFinder significantly reduced mental demand and maintained high perceived safety in both scenarios, whereas the auditory baseline led to lower perceived safety in the urban scenario than in the rural one. Qualitative feedback further supported PathFinder's effectiveness in providing spatial orientation while exiting the vehicle.
Presentation software still poses barriers to independent creation for blind and visually impaired users (BVIs) due to its visual-centric interface. To address this gap, we introduce I-Scratch, a multimodal system that empowers BVIs to independently create, explore, and edit PowerPoint slides. We initially designed I-Scratch to tackle the practical challenges BVIs face, then refined it through iterative participatory sessions with a blind user to improve its usability and accessibility. I-Scratch integrates a graphical tactile display with auditory guidance for multimodal feedback, simplifies the user interface, and leverages AI technologies for visual assistance in image generation and content interpretation. A user study with ten BVIs demonstrated that I-Scratch enables them to independently produce visually coherent and aesthetically pleasing slides, achieving a 91.25% rate of full or partial success and a Creativity Support Index (CSI) score of 85.07. We present five guidelines and future directions to support the creative work of BVIs in presentation software.
Many mobile apps are inaccessible, excluding people from their potential benefits. Existing rule-based accessibility checkers aim to mitigate these failures by identifying errors early in development, but they are constrained in the types of errors they can detect. We present ScreenAudit, an LLM-powered system designed to traverse mobile app screens, extract metadata and transcripts, and identify screen reader accessibility errors that existing checkers overlook. We recruited six accessibility experts, including one screen reader user, to evaluate ScreenAudit's reports across 14 unique app screens. Our findings indicate that ScreenAudit achieves an average coverage of 69.2%, compared to only 31.3% for a widely used accessibility checker. Experts reported that ScreenAudit delivered higher-quality reports, addressed more aspects of screen reader accessibility than existing checkers, and would benefit app developers in real-world settings.
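The ScreenAudit abstract above names a three-stage pipeline (traverse app screens, extract metadata and screen reader transcripts, prompt an LLM to flag errors) without giving implementation details. The Python sketch below illustrates one plausible way to wire such an audit pass together; every name in it (ScreenNode, build_audit_prompt, audit_screens, the prompt wording, and the llm callable) is a hypothetical stand-in, not ScreenAudit's actual API.

```python
# Minimal sketch of an LLM-driven screen audit pass in the spirit of the
# pipeline described in the ScreenAudit abstract. All identifiers here are
# hypothetical; the paper's real implementation is not specified above.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScreenNode:
    screen_id: str
    view_hierarchy: str   # serialized UI metadata (e.g., an accessibility tree)
    sr_transcript: str    # what a screen reader would announce, in focus order

def build_audit_prompt(screen: ScreenNode) -> str:
    """Combine UI metadata and the screen reader transcript into one prompt."""
    return (
        "You are an accessibility auditor. Given the UI metadata and the "
        "screen reader transcript below, list screen reader accessibility "
        "errors (missing labels, wrong focus order, unannounced state, ...).\n\n"
        f"UI metadata:\n{screen.view_hierarchy}\n\n"
        f"Screen reader transcript:\n{screen.sr_transcript}"
    )

def audit_screens(
    screens: list[ScreenNode], llm: Callable[[str], str]
) -> dict[str, str]:
    """Run the LLM audit over every traversed screen; returns id -> report."""
    return {s.screen_id: llm(build_audit_prompt(s)) for s in screens}
```

Pairing the raw view hierarchy with a linearized screen reader transcript in a single prompt is one way an LLM could catch errors that rule-based checkers miss, such as technically labeled but semantically meaningless announcements; whether ScreenAudit does exactly this is not stated in the abstract.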