AI tools like ChatGPT and Be-My-AI are increasingly being used by blind individuals. Although prior work has explored their use in some Do-It-Yourself (DIY) tasks by blind individuals, little is known about how blind users employ these tools and the available product-manual resources to assemble, operate, and troubleshoot physical/tangible products – tasks requiring spatial reasoning, structural understanding, and precise execution. We address this knowledge gap via an interview study and a usability study with blind participants, investigating how they leverage AI tools and product manuals for DIY tasks with physical products. Findings show that manuals are essential resources, but product-manual instructions are often inadequate for blind users. AI tools presently do not adequately address this insufficiency; in fact, we observed that they often exacerbate the problem with incomplete, incoherent, or misleading guidance. Lastly, we suggest improvements to AI tools for generating tailored instructions for blind users’ DIY tasks involving tangible products.
Disability Studies and Accessibility HCI document what design elements are (in)accessible to disabled communities and illuminate technological ableism. However, Disability Studies' systemic critique rarely includes roadmaps for design. We articulate a roadmap by marrying HCI's infrastructuring theory with Disability Studies into an approach called Infrastructuring for Access. Focusing on dyslexic writers' experiences with spell checkers, we demonstrate Infrastructuring for Access in a collaborative design process. We co-designed software to address limitations of spell checkers and conducted an eight-month field deployment. Our technological contribution is Jargon Manager, a toolkit with a browser extension that lets writers opportunistically save terms in a custom dictionary and then reuse them later via a word processor extension. Our theory contribution moves from a space of critique into a space of repair: Infrastructuring for Access expands the design space from only removing barriers to also institutionalizing disabled practitioners' existing workarounds, thereby alleviating access labor and broadening participation.
Accessibility of digital services needs to be improved, in particular for users with cognitive and learning disabilities. Our focus is on digital design patterns that have the potential to increase comprehension for users with low cognitive and reading skills by combining short, simple text with effective design. We conducted a case study with 43 easy-to-read (ETR) users and a control group of 12 university students, in which users selected a theater play, found a date, and ordered tickets. In a participatory design process with four co-researchers and a User Experience (UX) expert, we created test material with high ecological validity. Subsequent testing collected quantitative and qualitative data to provide a clear picture of the context of use, the differences in success among the three variants, and the challenges of a multi-step ordering process. Our contributions are an enhanced participatory design process involving UX experts, insights into ETR users’ context of use, and proposals for cognitively accessible design patterns for ticket selection, appointments, and an exemplary simplified ordering process that can be applied to similar digital services.
Accessibility forums and, more recently, generative AI tools have become vital resources for blind users seeking solutions to computer-interaction issues and learning about new assistive technologies, screen reader features, tutorials, and software updates. Understanding user experiences with these resources is essential for identifying and addressing persistent support gaps. To this end, we interviewed 14 blind users who regularly engage with forums and GenAI tools. Findings revealed that forums often overwhelm users with multiple overlapping topics, redundant or irrelevant content, and fragmented responses that must be mentally pieced together, increasing cognitive load. GenAI tools, while offering more direct assistance, introduce new barriers by producing unreliable answers, including overly verbose or fragmented guidance, fabricated information, and contradictory suggestions that fail to follow prompts, thereby heightening verification demands. Based on these insights, we outline design opportunities to improve the reliability of assistive resources, aiming to provide blind users with more trustworthy and cognitively manageable support.
Situational visual impairments (SVIs) hinder mobile readability, causing discomfort and limiting information access. Building on prior work in adaptive typography and accessibility, this paper presents SituFont, a context-aware, human-in-the-loop adaptive typography approach that enhances smartphone readability by dynamically adjusting font parameters in response to real-time contextual changes. Using smartphone sensors and human-in-the-loop feedback, SituFont personalizes text presentation to accommodate personal factors (e.g., fatigue, distraction) and environmental conditions (e.g., lighting, motion, location). To inform its design, we conducted formative interviews (N=15) to identify key SVI factors and controlled experiments (N=18) to quantify their impact on optimal text parameters. A comparative user study (N=12) across eight simulated SVI scenarios demonstrated SituFont's effectiveness, improving reading efficiency and reducing workload compared with a non-trivial manual adjustment baseline.
Generative AI (GenAI) tools are increasingly used for spreadsheet tasks, yet little is known about how blind users verify their outputs in accuracy-critical contexts. We conducted a study with 12 blind spreadsheet users to explore verification practices across tasks such as information extraction, formula generation, trend analysis, chart creation, and formatting. Participants never fully trusted outputs without verification and employed diverse strategies, including manual checks with screen-reader and spreadsheet features, verification assisted by the same AI tool, cross-validation with other AI tools, leveraging prior knowledge, and seeking human assistance. These approaches were adapted based on task context, perceived risk, and users’ expertise. Errors were common, particularly in chart generation and formatting; some were detected, others overlooked. While verification improved confidence, it was often effortful, time-consuming, or infeasible for visual tasks. We discuss how blind users utilize GenAI not only as a task performer but also as a verification aid and validator, highlighting design opportunities for more accessible and reliable spreadsheet use.