People with language impairments, such as aphasia, use a range of total communication strategies. These go beyond spoken language to include non-verbal utterances, props and gestures. The uptake of videoconferencing platforms necessitated by the Covid-19 pandemic means that people with aphasia now use these communication strategies online. However, no data exists on the impact of videoconferencing on communication for this population. Working with an aphasia charity that moved its conversation support sessions online, we investigated the experience of communicating via a videoconferencing platform through: 1) observations of online conversation support sessions; 2) interviews with speech and language therapists and volunteers; and 3) interviews with people with aphasia. Our findings reveal the unique and creative ways that the charity and its members with aphasia adapted their communication to videoconferencing. We unpack specific, novel challenges relating to total communication via videoconferencing and the related impacts on social and privacy issues.
https://dl.acm.org/doi/abs/10.1145/3491102.3502017
Webtoons are a type of digital comic read online, where readers can leave comments to share their thoughts on the story. While webtoons have surged in popularity internationally, people with visual impairments cannot enjoy them due to the lack of an accessible format. Although traditional image description practices can be adopted, the resulting descriptions cannot preserve webtoons' unique values, such as control over the reading pace and social engagement through comments. To improve the webtoon reading experience for blind and low vision (BLV) users, we propose Cocomix, an interactive webtoon reader that incorporates comments into the design of novel webtoon interactions. Since comments can identify story highlights and provide additional context, we designed a system that provides 1) comment-based adaptive descriptions with selective access to details and 2) panel-anchored comments for easy access to relevant descriptive comments. Our evaluation (N=12) showed that Cocomix users could adapt descriptions to various needs and better utilize comments.
https://dl.acm.org/doi/abs/10.1145/3491102.3502081
Most people who are blind interact with social media content with the assistance of a screen reader, software that converts text to speech. However, the language used in social media is well known to contain many informal out-of-vocabulary words (e.g., abbreviations, wordplay, slang), many of which do not have standard pronunciations. The narration behavior of screen readers for such out-of-vocabulary words, and the corresponding impact on the social media experience of blind screen reader users, are still uncharted research territory. We therefore seek to plug this knowledge gap by examining how current popular screen readers narrate different types of out-of-vocabulary words found on Twitter, and how the presence of such words in tweets influences the interaction behavior and comprehension of blind screen reader users. Our investigation showed that screen readers rarely autocorrect out-of-vocabulary words and, moreover, do not always exhibit ideal behavior for certain prolific types of out-of-vocabulary words, such as acronyms and initialisms. We also observed that blind users often rely on tedious and taxing workarounds to comprehend the actual meanings of out-of-vocabulary words. Informed by these observations, we discuss methods that could reduce this interaction burden for blind users on social media.
https://dl.acm.org/doi/abs/10.1145/3491102.3501958
As video conferencing (VC) has become increasingly necessary for many aspects of daily life, many d/Deaf and hard of hearing people, particularly those who communicate via sign language (signers), face distinct accessibility barriers. To better understand the unique requirements for participating in VC using a visual-gestural language such as ASL, and to identify practical design considerations for signer-inclusive VC, we conducted 12 interviews and four co-design sessions with a total of eight d/Deaf signers and eight ASL interpreters. We found that participants' access needs regarding consuming information (e.g., visual clarity of signs), communicating (e.g., getting the attention of others), and collaborating (e.g., working with interpreter teams) are not well met on existing VC platforms. We share novel insights into attending and conducting signer-accessible video conferences, outline considerations for future VC design, and provide guidelines for conducting remote research with d/Deaf signers.
https://dl.acm.org/doi/abs/10.1145/3491102.3517488