Augmented communicators (ACs) use augmentative and alternative communication (AAC) technologies to speak. Prior work in AAC research has sought to improve the efficiency and expressivity of AAC through device improvements and user training. However, ACs also face communication constraints beyond their device and individual abilities, such as when they can speak, what they can say, and whom they can address. In this work, we recast and broaden this prior work using conversational agency as a new frame to study AC communication. We investigate AC conversational agency with a study examining different conversational tasks between four triads of expert ACs, their close conversation partners (paid aide or parent), and a third party (experimenter). We define metrics to analyze AAC conversational agency quantitatively and qualitatively. We conclude with implications for future research to enable ACs to easily exercise conversational agency.
https://doi.org/10.1145/3313831.3376376
Nonspeaking individuals with motor disabilities typically have very low communication rates. This paper proposes a design engineering approach for quantitatively exploring context-aware sentence retrieval as a promising complementary input interface, working in tandem with a word-prediction keyboard. We motivate the need for complementary design engineering methodology in the design of augmentative and alternative communication and explain how such methods can be used to gain additional design insights. We then study the theoretical performance envelopes of a context-aware sentence retrieval system, identifying potential keystroke savings as a function of the parameters of the subsystems, such as the accuracy of the underlying auto-complete word prediction algorithm and the accuracy of sensed context information under varying assumptions. We find that context-aware sentence retrieval has the potential to provide users with considerable improvements in keystroke savings under reasonable parameter assumptions of the underlying subsystems. This highlights how complementary design engineering methods can reveal additional insights into design for augmentative and alternative communication.
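The abstract above analyzes keystroke savings as a function of subsystem parameters such as word-prediction accuracy and context-sensing accuracy. The following is a minimal illustrative sketch of that kind of parameter-sweep analysis; the model, parameter names, and numbers here are assumptions for illustration, not the paper's actual model.

```python
# Illustrative sketch (not the paper's model): expected keystroke savings when a
# context-aware sentence retrieval system is layered on a word-prediction
# keyboard. All parameters and the model structure are assumptions.

def expected_keystroke_savings(p_retrieval_hit, wp_savings,
                               select_cost=1, sentence_len=40):
    """Expected fractional keystroke savings per sentence.

    p_retrieval_hit : probability the retrieval subsystem surfaces the intended
                      sentence (driven by context-sensing accuracy)
    wp_savings      : fractional savings of the fallback word-prediction
                      keyboard (driven by auto-complete accuracy)
    select_cost     : keystrokes needed to select a retrieved sentence
    sentence_len    : assumed average sentence length in characters
    """
    hit_savings = 1 - select_cost / sentence_len   # nearly all keystrokes saved
    miss_savings = wp_savings                      # fall back to word prediction
    return p_retrieval_hit * hit_savings + (1 - p_retrieval_hit) * miss_savings

# Sweep retrieval accuracy to sketch a theoretical performance envelope.
for p in (0.2, 0.5, 0.8):
    print(f"p_hit={p:.1f}: expected savings={expected_keystroke_savings(p, 0.4):.2f}")
```

The sweep mirrors the paper's envelope-style analysis: even with a modest retrieval hit rate, total savings can exceed those of word prediction alone, because a successful retrieval replaces an entire sentence with a single selection.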
The number of people with vision impairments using Conversational Agents (CAs) has increased because of the potential of this technology to support them. As many visually impaired people are accustomed to understanding fast speech, most screen readers and voice assistant systems offer speech rate settings. However, current CAs are designed to interact at a human-like speech rate without considering accessibility. In this study, we sought to understand how people with vision impairments use CAs at a fast speech rate. We conducted a 20-day in-home study that examined the CA use of 10 visually impaired people at default and fast speech rates. We investigated the differences in visually impaired people's CA use at different speech rates and their perceptions of the CA at each rate. Based on these findings, we suggest considerations for the future design of CA speech rates for people with visual impairments.
Automatic Text Simplification (ATS), which replaces text with simpler equivalents, is rapidly improving. While some research has examined ATS reading-assistance tools, little has examined the preferences of adults who are deaf or hard-of-hearing (DHH), and none has empirically evaluated lexical simplification technology (replacement of individual words) with these users. Prior research has revealed that U.S. DHH adults have lower reading literacy on average than their hearing peers, with unique characteristics to their literacy profile. We investigate whether DHH adults perceive a benefit from lexical simplification applied automatically or when users are provided with greater autonomy, with on-demand control over and visibility into which words are replaced. Formative interviews guided the design of an experimental study, in which DHH participants read English texts in their original form and with lexical simplification applied automatically or on-demand. Participants indicated that they perceived a benefit from lexical simplification, and they preferred a system with on-demand simplification.
People with visual impairments (PVI) must interact with a world they cannot see. Remote sighted assistance (RSA) has emerged as a conversational assistive technology. We interviewed RSA assistants ("agents") who provide assistance to PVI via a conversational prosthetic called Aira (https://aira.io/) to understand their professional practice. We identified four types of support provided: scene description, navigation, task performance, and social engagement. We discovered that RSA provides an opportunity for PVI to appropriate the system as a richer conversational and social support tool. We identified patterns in how agents provide assistance and interact with PVI, as well as the challenges and strategies associated with each context. We found that conversational interaction is highly context-dependent. We also discuss implications for design.