Supporting Communication Needs A

Conference Name
CHI 2024
Lights, Camera, Access: A Closeup on Audiovisual Media Accessibility and Aphasia
Abstract

The presence of audiovisual media is a mainstay in the lives of many, increasingly so with technological progress. Accessing video and audio content, however, can be challenging for people with diverse needs. Existing research has explored a wide range of accessibility challenges and worked with disabled communities to design technologies that help bridge the access gap. Despite this work, our understanding of the challenges faced by communities with complex communication needs (CCNs) remains poor. To address this shortcoming, we present the first study that investigates the viewing experience of people with the communication impairment aphasia through an online survey (N=41) and two focus group sessions (N=10), with the aim of understanding their specific access challenges. We find that aphasia significantly impacts the viewing experience and present a taxonomy of access barriers and facilitators, with suggestions for future research.

Authors
Alexandre Nevsky
King's College London, London, United Kingdom
Timothy Neate
King's College London, London, United Kingdom
Elena Simperl
King's College London, London, United Kingdom
Madeline N. Cruice
City, University of London, London, United Kingdom
Paper URL

https://doi.org/10.1145/3613904.3641893

Video
Co-Designing QuickPic: Automated Topic-Specific Communication Boards from Photographs for AAC-Based Language Instruction
Abstract

Traditional topic-specific communication boards for Augmentative and Alternative Communication (AAC) require manual programming of relevant symbolic vocabulary, which is time-consuming and often impractical even for experienced Speech-Language Pathologists (SLPs). While recent research has demonstrated the potential to automatically generate these boards from photographs using artificial intelligence, there has been no exploration of how to design such tools to support the specific needs of AAC-based language instruction. This paper introduces QuickPic, a mobile AAC application co-designed with SLPs and special educators, aimed at enhancing language learning for non-speaking individuals, such as autistic children. Through a 17-month design process, we uncover the unique design features required to provide timely language support in therapy and special education contexts. We present emerging evidence on the overall satisfaction of SLPs using QuickPic, and on the advantages of large language model-based generation compared to the existing technique for automated vocabulary generation from photographs for AAC.
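The abstract describes generating topic-specific communication boards from photographs. A minimal sketch of the general idea follows; the data structures, grid size, and part-of-speech colour-coding are assumptions for illustration, not QuickPic's actual implementation, and the example vocabulary stands in for what an LLM might return for a photo:

```python
# Hypothetical sketch: arranging photo-derived vocabulary into an AAC
# communication board grid. Not QuickPic's real code.
from dataclasses import dataclass

@dataclass
class BoardCell:
    word: str
    part_of_speech: str  # often used to colour-code AAC symbols

def build_board(candidate_words, columns=3):
    """Arrange (word, part-of-speech) pairs into rows of a board grid."""
    cells = [BoardCell(w, pos) for w, pos in candidate_words]
    return [cells[i:i + columns] for i in range(0, len(cells), columns)]

# Illustrative vocabulary for a photo of a birthday party.
words = [("cake", "noun"), ("blow", "verb"), ("candle", "noun"),
         ("happy", "adjective"), ("sing", "verb"), ("gift", "noun")]
board = build_board(words, columns=3)
```

The point of automating this step is that an SLP no longer hand-programs each cell; the board is regenerated per photograph.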

Authors
Mauricio Fontana de Vargas
McGill University, Montreal, Quebec, Canada
Christina Yu
Boston Children's Hospital, Boston, Massachusetts, United States
Howard C. Shane
Boston Children's Hospital, Boston, Massachusetts, United States
Karyn Moffatt
McGill University, Montreal, Quebec, Canada
Paper URL

https://doi.org/10.1145/3613904.3642080

Video
Empowering Independence Through Design: Investigating Standard Digital Design Patterns For Easy-to-Read Users.
Abstract

As designers and researchers, it is our duty to ensure information accessibility for all, irrespective of cognitive abilities. Currently, Easy-to-Read (ETR) is commonly used to simplify text for individuals with cognitive impairments. Although design aspects of text comprehensibility have recently gained attention, digital design patterns remain relatively unexplored. Our understanding of how ETR users interact with digital media, and of how to design specifically for their needs, is still limited. Our study involved observing 20 German ETR users engaging with a digital PDF and a website designed in a participatory process. We collected data on their access to digital media, their personal use and workarounds, and their interaction with digital design patterns. Tasks on the smartphone were mostly completed successfully, while only 50% of participants could navigate a digital PDF. In both cases, visual cues played a significant role. Our findings contribute recommendations for beneficial digital design patterns and future research.

Authors
Sabina Sieghart
University of Hasselt, Hasselt, Belgium
Björn Rohles
University of Luxembourg, Esch-sur-Alzette, Luxembourg
Ann Bessemans
University of Hasselt, Hasselt, Belgium
Paper URL

https://doi.org/10.1145/3613904.3641911

Video
ChatDirector: Enhancing Video Conferencing with Space-Aware Scene Rendering and Speech-Driven Layout Transition
Abstract

Remote video conferencing systems (RVCS) are widely adopted in personal and professional communication. However, they often lack the co-presence experience of in-person meetings. This is largely due to the absence of intuitive visual cues and clear spatial relationships among remote participants, which can lead to speech interruptions and loss of attention. This paper presents ChatDirector, a novel RVCS that overcomes these limitations by incorporating space-aware visual presence and speech-aware attention transition assistance. ChatDirector employs a real-time pipeline that converts participants' RGB video streams into 3D portrait avatars and renders them in a virtual 3D scene. We also contribute a decision tree algorithm that directs the avatar layouts and behaviors based on participants' speech states. We report on results from a user study (N=16) where we evaluated ChatDirector. The satisfactory algorithm performance and complimentary subjective user feedback imply that ChatDirector significantly enhances communication efficacy and user engagement.
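The abstract mentions a decision tree that maps participants' speech states to avatar layouts. A minimal sketch of such a rule-based mapping follows; the layout names and branching rules here are illustrative assumptions, not the paper's actual decision tree:

```python
# Hypothetical sketch of a speech-state-driven layout chooser.
# Layout names ("grid", "focus", "pairwise") are invented for illustration.
def choose_layout(speech_states):
    """Pick a scene layout from who is currently speaking.

    speech_states: dict mapping participant id to "speaking" or "listening".
    """
    speakers = [p for p, s in speech_states.items() if s == "speaking"]
    if not speakers:
        return "grid"      # no one talking: neutral side-by-side view
    if len(speakers) == 1:
        return "focus"     # single speaker: avatars orient toward them
    return "pairwise"      # overlapping speech: emphasize the exchange
```

The design intent described in the abstract is that such transitions supply the spatial and attentional cues that flat video grids lack.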

Authors
Xun Qian
Purdue University, West Lafayette, Indiana, United States
Feitong Tan
Google, Mountain View, California, United States
Yinda Zhang
Google, Mountain View, California, United States
Brian Collins
Google, San Francisco, California, United States
David Kim
Google, Zurich, Switzerland
Alex Olwal
Google Inc., Mountain View, California, United States
Karthik Ramani
Purdue University, West Lafayette, Indiana, United States
Ruofei Du
Google, San Francisco, California, United States
Paper URL

https://doi.org/10.1145/3613904.3642110

Video
COR Themes for Readability from Iterative Feedback
Abstract

Digital reading applications give readers the ability to customize fonts, sizes, and spacings, all of which have been shown to improve the reading experience for readers from different demographics. However, tweaking these text features can be challenging, especially given how they interact to shape the final look and feel of the text. Our solution is to offer readers preset combinations of font, character, word, and line spacing, which we bundle together into reading themes. We identify a recommended set of reading themes through data-driven design iterations with the crowd and experts. We show that after four design iterations, we converge on a set of three COR themes (Compact, Open, and Relaxed) that meet diverse readers' preferences, when evaluating the reading speeds, comprehension scores, and preferences of hundreds of readers with and without dyslexia, using crowdsourced experiments.
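A reading theme as described here is essentially a named bundle of typographic parameters. The sketch below shows one way an application could represent and apply such presets; the font name and all numeric values are placeholder assumptions, not the settings the paper converged on:

```python
# Hypothetical sketch of preset reading themes. Values are illustrative
# placeholders, not the paper's measured Compact/Open/Relaxed settings.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReadingTheme:
    name: str
    font: str
    character_spacing_em: float
    word_spacing_em: float
    line_spacing: float

THEMES = {
    "Compact": ReadingTheme("Compact", "Source Serif", 0.00, 0.00, 1.2),
    "Open":    ReadingTheme("Open",    "Source Serif", 0.03, 0.05, 1.5),
    "Relaxed": ReadingTheme("Relaxed", "Source Serif", 0.05, 0.10, 1.8),
}

def apply_theme(theme: ReadingTheme) -> dict:
    """Translate a theme into CSS-like properties a reader app could set."""
    return {
        "font-family": theme.font,
        "letter-spacing": f"{theme.character_spacing_em}em",
        "word-spacing": f"{theme.word_spacing_em}em",
        "line-height": str(theme.line_spacing),
    }
```

Bundling the parameters this way is what spares readers from tuning interacting features one by one: a single choice sets all of them coherently.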

Award
Honorable Mention
Authors
Tianyuan Cai
Adobe Research, San Francisco, California, United States
Aleena Gertrudes Niklaus
Adobe Inc., San Jose, California, United States
Bernard Kerr
Adobe, San Francisco, California, United States
Michael Kraley
Adobe, Lexington, Massachusetts, United States
Zoya Bylinskii
Adobe Research, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3613904.3642108

Video