A Personalized and Adaptable User Interface for a Speech and Cursor Brain-Computer Interface
Description

Communication and computer interaction are important for autonomy in modern life. Unfortunately, these capabilities can be limited or inaccessible for the millions of people living with paralysis. While implantable brain-computer interfaces (BCIs) show promise for restoring these capabilities, little work has explored the design of BCI user interfaces (UIs) for sustained daily use. Here, we present a personalized UI for an intracortical BCI system that enables users with severe paralysis to communicate and interact with their computers independently. Through a 22-month longitudinal deployment with one participant, we used iterative co-design to develop a system for everyday at-home use and documented how it evolved to meet changing needs. We then adapted the same framework to a second participant with different BCI control methods, demonstrating the interface's adaptability across users. Our findings highlight how personalization and adaptability enabled independence in daily life and provide design implications for developing future BCI assistive technologies.

From Struggle to Success: Context-Aware Guidance for Screen Reader Users in Computer Use
Description

Equal access to digital technologies is critical for education, employment, and social participation.

However, mainstream interfaces are visually oriented, creating steep learning curves and frequent obstacles for screen reader users, and limiting their independence and opportunities.

Existing support is inadequate: tutorials mainly target sighted users, while human assistance is rarely available in real time.

We introduce AskEase, an on-demand AI assistant that provides step-by-step, screen reader user-friendly guidance for computer use.

AskEase manages multiple sources of context to infer user intent and deliver precise, situation-specific guidance.

Its seamless interaction design minimizes disruption and reduces the effort of seeking help.

We demonstrated its effectiveness through representative usage scenarios and robustness tests.

In a within-subjects study with 12 screen reader users, AskEase significantly improved task success while reducing perceived workload, including physical demand, effort, and frustration.

These results demonstrate the potential of LLM-powered assistants to promote accessible computing and expand opportunities for users with visual impairments.

Programmers Who Use Screen Readers in the Vibe Coding Era: Adaptation, Empowerment, and New Accessibility Landscape
Description

Generative AI agents are reshaping human-computer interaction, shifting users from direct task execution to supervising machine-driven actions, a shift exemplified by the rise of "vibe coding" in programming.

Yet little is known about how programmers who use screen readers interact with AI code assistants in practice.

We conducted a longitudinal study with 16 blind and low-vision programmers.

Participants completed a GitHub Copilot tutorial, engaged with a programming task, and provided initial feedback.

After two weeks of AI-assisted programming, follow-ups examined how their practices and perceptions evolved.

Our findings show that code assistants enhanced programming efficiency and bridged accessibility gaps.

However, participants struggled to convey intent, interpret AI outputs, and manage multiple views while maintaining situational awareness.

They showed diverse preferences for accessibility features, expressed a need to balance automation with control, and encountered barriers when learning to use these tools.

Furthermore, we propose design principles and recommendations for more accessible and inclusive human-AI collaborations.

AI That Moves With You: A Review of Interactive Technologies Powered by Large Foundation Models for Mobility Impairment
Description

Large foundation models (FMs), including large language models (LLMs), large vision models (LVMs), vision-language models (VLMs), and related variants, have been rapidly reshaping interactive assistive technologies in recent years. We present a review of FM-enabled interactive systems for people with mobility impairments, covering work published from January 2020 to May 2025. Searching five databases, we screened 6,249 records and included 27 full papers. We first summarize descriptive results, including the study designs and evaluation approaches of the reviewed studies. We then synthesize FM techniques, model integration patterns, interaction paradigms, and mobility impairment contexts. Our analysis surfaces and distills both technical and ethical challenges, illuminating future research topics. We contribute: (i) a conceptualization of FM-enabled interactions for mobility impairment that functions as a design space; (ii) a tabulated corpus with a reproducible codebook; and (iii) a forward-looking agenda to guide and inspire the design of future mobility-assistance interactive systems within the human-computer interaction (HCI) and CHI communities.

Imagine, Interact: Eliciting Accessible Interactions from Users with Motor Impairments via Imagined Input Devices
Description

We present empirical results from a study conducted with eleven users with upper-body motor impairments who imagined input devices and corresponding gestures to operate them for performing common tasks in interactive systems. We report a strong preference for embodying devices (80%), primarily through the hands, rather than holding them, and identify ten device archetypes, among which smartphones (36.4%) and remote controls (27.3%) were most prevalent. We also observed a diversity of gestures to operate imagined devices involving both unimanual and bimanual input with little consensus (.069) across participants, which we analyzed in relation to their self-reported motor impairments. Based on these findings, we propose design recommendations for accessible interactions involving imagined input devices, structured through the lenses of ability-based and ability-mediating design. We also outline future work opportunities for imagination-powered accessible computing, in which users' imagination plays a central role.

TaskAudit: Detecting Functiona11ity Errors in Mobile Apps via Agentic Task Execution
Description

Accessibility checkers are tools that support accessible app development, and their use is encouraged by accessibility best practices. However, most current checkers evaluate static or mechanically generated contexts, failing to capture common accessibility errors that impact mobile app functionality. In this work, we define functiona11ity errors as accessibility barriers that only manifest through interaction (named after a blend of "functionality" and "accessibility"). We introduce TaskAudit, which comprises three components: a Task Generator that constructs interactive tasks from app screens, a Task Executor that uses agents with a screen reader proxy to perform these tasks, and an Accessibility Analyzer that detects and reports accessibility errors by examining interaction traces. Our evaluation on real-world apps shows that TaskAudit detects 48 functiona11ity errors from 54 app screens, compared to between 4 and 20 with existing checkers. Our analysis demonstrates common error patterns that TaskAudit can detect in addition to those from prior work, including label-functionality mismatch, cluttered navigation, and inappropriate feedback.
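The three-component pipeline described above can be sketched in broad strokes. This is a minimal illustrative skeleton only: all class names, fields, and the no-feedback heuristic below are our assumptions for exposition, not the authors' actual implementation or API.

```python
# Hypothetical sketch of a TaskAudit-style pipeline (all names are illustrative).
from dataclasses import dataclass


@dataclass
class Task:
    description: str   # an interactive task derived from a screen
    screen: str        # the app screen the task starts from


@dataclass
class TraceStep:
    action: str        # action issued through a screen reader proxy
    announcement: str  # what the screen reader announced afterwards


@dataclass
class Finding:
    task: Task
    error_type: str    # e.g. "inappropriate feedback"


def generate_tasks(screen: str) -> list[Task]:
    # Task Generator: construct interactive tasks from an app screen.
    # (A real system would inspect the UI hierarchy; this stub emits one task.)
    return [Task(description=f"exercise {screen}", screen=screen)]


def execute_task(task: Task) -> list[TraceStep]:
    # Task Executor: an agent performs the task via a screen reader proxy,
    # recording each action and the announcement it produced.
    # (Stubbed here with a single step that produced no announcement.)
    return [TraceStep(action="activate button", announcement="")]


def analyze(task: Task, trace: list[TraceStep]) -> list[Finding]:
    # Accessibility Analyzer: examine the interaction trace for barriers.
    # Assumed heuristic: a step with no announcement signals missing feedback.
    return [Finding(task, "inappropriate feedback")
            for step in trace if not step.announcement]


def audit(screens: list[str]) -> list[Finding]:
    # End-to-end run: generate tasks per screen, execute, and analyze traces.
    findings: list[Finding] = []
    for screen in screens:
        for task in generate_tasks(screen):
            findings.extend(analyze(task, execute_task(task)))
    return findings
```

The key design point the abstract highlights is that errors are detected from *interaction traces* produced by actually performing tasks, rather than from a static scan of a single screen.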

Robot-Assisted Group Tours for Blind People
Description

Group interactions are essential to social functioning, yet effective engagement relies on the ability to recognize and interpret visual cues, making such engagement a significant challenge for blind people. In this paper, we investigate how a mobile robot can support group interactions for blind people. We used the scenario of a guided tour with mixed-visual groups involving blind and sighted visitors. Based on insights from an interview study with blind people (n=5) and museum experts (n=5), we designed and prototyped a robotic system that supported blind visitors in joining group tours. We conducted a field study in a science museum where each blind participant (n=8) joined a group tour with one guide and two sighted participants (n=8). Findings indicated that the robot's navigational support gave users a sense of safety, and revealed concerns about group participation as well as preferences for how environmental information is obtained. We present design implications for future robotic systems to support blind people's mixed-visual group participation.
