Augmenting expression and communication

Conference name
CHI 2026
A11yExtensions: Accessibility Extensions to Augment Mobile AI Assistive Technology In-Situ
Abstract

Existing visual AI assistive technologies have usability gaps and may need additional adaptations and features to serve users' needs. We propose A11yExtensions, in-situ interventions that augment existing mobile AI assistive technology with add-on services. Add-ons include features that have been researched but are not yet deployed (e.g., cross-checking AI results), or that are only available in certain applications (e.g., camera aiming assistance). Through co-design sessions with two blind accessibility professionals, we designed and implemented three exemplar extensions, leveraging mobile automation tools to invoke add-ons and enable just-in-time interventions for adaptability. We found that A11yExtensions provide opportunities to test new features and a new degree of flexibility and customization, though they introduce additional onboarding and communication challenges. We also derived a design space of accessibility extensions as a basis for future extension designs. Overall, A11yExtensions demonstrates the effectiveness of deploying new features in-situ via automation, with the technologies people actually use in their day-to-day lives.

Authors
Jaylin Herskovitz
University of Michigan, Ann Arbor, Michigan, United States
Ellie Seehorn
University of Michigan, Ann Arbor, Michigan, United States
Ather Jammoa
Independent Consultant, Detroit, Michigan, United States
Jason Meddaugh
Principal, A.T. Guys, Kalamazoo, Michigan, United States
Anhong Guo
University of Michigan, Ann Arbor, Michigan, United States
I, Robot? Exploring Ultra-Personalized AI-Powered AAC; an Autoethnographic Account
Abstract

Generic AI auto-complete for message composition often fails to capture the nuance of personal identity, requiring editing. While harmless in low-stakes settings, for users of Augmentative and Alternative Communication (AAC) devices, who rely on such systems to communicate, this burden is severe. Intuitively, the need for edits would be lower if language models were personalized to the specific user's communication. While personalization is technically feasible, it raises questions about how such systems affect AAC users' agency, identity, and privacy. We conducted an autoethnographic study in three phases: (1) seven months of collecting all the lead author's AAC communication data, (2) fine-tuning a model on this dataset, and (3) three months of daily use of personalized AI suggestions. We observed that logging everyday conversations reshaped the author's sense of agency, model training selectively amplified or muted aspects of his identity, and suggestions occasionally resurfaced private details outside their original context. We find that ultra-personalized AAC reshapes communication by continually renegotiating agency, identity, and privacy between user and model. We highlight design directions for building personalized AAC technology that supports expressive, authentic communication.

Authors
Tobias M. Weinberg
Cornell Tech, New York, New York, United States
Ricardo E. Gonzalez Penuela
Cornell Tech, Cornell University, New York, New York, United States
Stephanie Valencia
University of Maryland College Park, College Park, Maryland, United States
Thijs Roumen
Cornell Tech, New York, New York, United States
Video
Do-It-Yourself AAC: Co-Designing User-Programmable AI Communication Tools with People with Aphasia
Abstract

Aphasia, a language disorder that affects a person's ability to communicate, can present profound challenges in daily life. Augmentative and Alternative Communication (AAC) technology can support people with aphasia (PWA) in navigating such challenges, but is often difficult to customize and can overlook personal communication needs. End-user programming, which allows a user to develop and use their own custom programs, presents a promising solution where PWA can create their own personalized solutions. With recent research highlighting how generative AI can be helpful for AAC users, we propose a visual user-programming method that enables PWA to combine multiple AI functions to create personalized communication tools. We conducted semi-structured interviews and co-design workshops with eight participants with aphasia to understand how user-programmed AI systems can address personal communication needs. We present PWA's perspectives on user-programmable communication tools and discuss the accessibility of visual programming methods for creating aphasia-friendly AI systems.

Authors
Jong Ho Lee
University of Maryland, College Park, College Park, Maryland, United States
Stephanie Valencia
University of Maryland, College Park, Maryland, United States
Giving Meaning to Movements: Challenges and Opportunities in Expanding Communication by Pairing Unaided AAC with Speech Generated Messages
Abstract

Augmentative and Alternative Communication (AAC) technologies are categorized into two forms: aided AAC, which uses external devices like speech-generating systems to produce standardized output, and unaided AAC, which relies on body-based gestures for natural expression but requires shared understanding. We investigate how to combine these approaches to harness the speed and naturalness of unaided AAC while maintaining the intelligibility of aided AAC, a largely unexplored area for individuals with communication and motor impairments. Through 18 months of participatory design with AAC users, we identified key challenges and opportunities and developed AllyAAC, a wearable system with a wrist-worn IMU paired with a smartphone app. We evaluated AllyAAC in a field study with 14 participants and produced a dataset containing over 600,000 multimodal data points featuring atypical gestures—the first of its kind. Our findings reveal challenges in recognizing personalized, idiosyncratic gestures and demonstrate how to address them using Transformer-based large machine learning (ML) models with different pretraining strategies. In sum, we contribute design principles and a reference implementation for adaptive, personalized systems combining aided and unaided AAC.

Authors
Imran Kabir
Pennsylvania State University, University Park, Pennsylvania, United States
Sharon Ann Redmon
Pennsylvania State University, University Park, Pennsylvania, United States
Lynn R. Elko
See CVI, Speak AAC, Tamaqua, Pennsylvania, United States
Kevin Williams
L.L. Slim LLC., Charlotte, North Carolina, United States
Mitchell A Case
Pennsylvania State University, University Park, Pennsylvania, United States
Dawn J Sowers
Florida State University, Tallahassee, Florida, United States
Krista Wilkinson
Pennsylvania State University, University Park, Pennsylvania, United States
Syed Masum Billah
Pennsylvania State University, University Park, Pennsylvania, United States
Creating Disability Story Videos with Generative AI: Motivation, Expression, and Sharing
Abstract

Generative AI (GenAI) is both promising and challenging in supporting people with disabilities (PwDs) in creating stories about disability. GenAI can reduce barriers to media production and inspire the creativity of PwDs, but it may also introduce biases and imperfections that hinder its adoption for personal expression. In this research, we examine how nine PwDs from a disability advocacy group used GenAI to create videos sharing their disability experiences. Grounded in digital storytelling theory, we explore the motivations, expression, and sharing of PwD-created GenAI story videos. We conclude with a framework of momentous depiction, which highlights four core affordances of GenAI that either facilitate or require improvements to better support disability storytelling: non-capturable depiction, identity concealment and representation, contextual realism and consistency, and emotional articulation. Based on this framework, we further discuss design implications for GenAI in relation to story completion, media formats, and corrective mechanisms.

Authors
Shuo Niu
Clark University, Worcester, Massachusetts, United States
Dylan Clements
Clark University, Worcester, Massachusetts, United States
Hyungsin Kim
Clark University, Worcester, Massachusetts, United States
“I followed what felt right, not what I was told”: Autonomy, Coaching, and Recognizing Bias Through AI-Mediated Dialogue
Abstract

Ableist microaggressions remain pervasive in everyday interactions, yet interventions to help people recognize them are limited. We present an experiment testing how AI-mediated dialogue influences recognition of ableism. A total of 160 participants completed a pre-test, an intervention, and a post-test across four conditions: AI nudges toward bias (Bias-Directed), AI nudges toward inclusion (Neutral-Directed), unguided dialogue (Self-Directed), and a text-only non-dialogue condition (Reading). Participants rated scenarios on standardness of social experience and emotional impact; those in dialogue-based conditions also provided qualitative reflections. Quantitative results showed dialogue-based conditions produced stronger recognition than Reading, though trajectories diverged: biased nudges improved differentiation of bias from neutrality but increased overall negativity. Inclusive or no nudges remained more balanced, while Reading participants showed weaker gains and even declines. Qualitative findings revealed biased nudges were often rejected, while inclusive nudges were adopted as scaffolding. We contribute a validated vignette corpus, an AI-mediated intervention platform, and design implications highlighting trade-offs conversational systems face when integrating bias-related nudges.

Authors
Atieh Taheri
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Hamza El Alaoui
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Patrick Carrington
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Jeffrey P. Bigham
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
EmojiFan: Designing A Social Interface Supporting Facial Expression Interaction for Blind and Low Vision People in Party Settings
Abstract

Facial expression interactions play a crucial role in fostering social bonds and expressing emotions. However, in the dynamic, fast-paced, and noisy environments of parties, various factors hinder blind and low-vision (BLV) individuals from engaging fully in facial expression interactions. While previous research has explored how BLV users can convey emotions through non-verbal visual cues, it has largely overlooked the challenges they face in engaging with facial expressions after perceiving these cues. To address this gap, we conducted a formative study with 10 BLV users to identify their challenges and expectations regarding facial expression interactions at parties. Guided by these insights, we developed EmojiFan, an AI-powered smart fan designed to offer a personalized representation of facial expressions through dynamic, expressive emojis. Finally, we carried out an in-the-field study with 6 BLV participants and 8 sighted social partners to examine the effectiveness of EmojiFan in enhancing facial-expression interactions during parties. Overall, our goal is to empower BLV individuals' autonomy to actively participate in social interactions through digital facial expression, thereby contributing new insights for the accessibility community on designing expressive, socially responsive assistive technologies.

Authors
Jinlin Miao
China Academy of Art, Hangzhou, China
Shan Luo
China Academy of Art, Hangzhou, China
Yue Chen
China Academy of Art, Hangzhou, China
Hongyue Wang
Monash University, Melbourne, Victoria, Australia
Zhejun Zhang
China Academy of Art, Hangzhou, China
Rina R. Wehbe
Dalhousie University, Halifax, Nova Scotia, Canada