Previous research has underscored the potential of danmaku, a text-based video commenting feature, for engaging hearing audiences. However, many Deaf and hard-of-hearing (DHH) users prioritize American Sign Language (ASL) over English. To improve inclusivity, we introduce Signmaku, a commenting mechanism that presents danmaku-style comments in ASL. Through a need-finding study (N=12) and a within-subject experiment (N=20), we evaluated three design styles: realistic human, cartoon-like, and robotic depictions. We found that cartoon-like signmaku not only provided entertainment but also prompted participants to create and share ASL comments with fewer privacy concerns than the other designs. Conversely, the robotic design's limited accuracy in conveying hand movements and facial expressions increased cognitive demands. Realistic signmaku elicited the lowest cognitive load and was the easiest to understand of the three types. Our findings offer unique design implications for leveraging generative AI to create signmaku comments, enhancing co-learning experiences for DHH users.
https://doi.org/10.1145/3613904.3642287
This paper presents a mixed-methods study of how deaf, hard-of-hearing, and hearing viewers perceive live TV caption quality, using captioned video stimuli designed to mirror TV captioning experiences. To assess caption quality, we used four common accuracy-focused quality metrics: word error rate, weighted word error rate, automated caption evaluation (ACE), and its successor ACE2. We calculated the correlation between the four quality metrics and viewers' subjective quality ratings and found it to be weak, revealing that factors beyond accuracy affect user ratings. Even high-quality captions were perceived to have problems, despite controlling for confounding factors. Qualitative analysis of viewer comments revealed three major factors affecting their experience: errors within captions, difficulty in following captions, and caption appearance. The findings raise questions as to how objective caption quality metrics can be reconciled with the user experience across a diverse spectrum of viewers.
https://doi.org/10.1145/3613904.3641988
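For readers unfamiliar with the accuracy metrics named above, the sketch below (our illustration, not the authors' code) shows how a per-clip word error rate might be computed and then correlated with subjective viewer ratings; the clip WER values and ratings are invented purely for illustration.

```python
# Minimal sketch: word error rate (WER) for a caption against a reference
# transcript, then a rank correlation between per-clip WER and viewer ratings.
from scipy.stats import spearmanr

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard word-level edit-distance dynamic program.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical per-clip data: WER of the live captions and mean viewer rating (1-5).
wers    = [0.02, 0.05, 0.10, 0.18, 0.25]
ratings = [4.1,  4.3,  3.9,  3.8,  3.5]
rho, p = spearmanr(wers, ratings)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # a weak |rho| would echo the paper's finding
```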
With recent advancements in intelligent personal assistants (IPAs) built on automatic speech recognition, their popularity within households is rapidly increasing. In this study, we used a Wizard-of-Oz methodology to evaluate and compare the usability of American Sign Language (ASL), Tap to Alexa, and smart home apps among 23 deaf participants within a limited-domain smart home environment. Results indicate a slight usability preference for ASL. Linguistic analysis of the participants' signing reveals a diverse range of expressions and vocabulary as they interacted with IPAs in this restricted-domain application. On average, deaf participants exhibited a vocabulary of 47 +/- 17 signs plus 10 +/- 7 fingerspelled words, for a total of 246 distinct signs and 93 distinct fingerspelled words across all participants. We discuss the implications for the design of limited-vocabulary applications as a stepping stone toward general-purpose ASL recognition in the future.
https://doi.org/10.1145/3613904.3642094
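The vocabulary figures reported above (e.g., 47 +/- 17 signs per participant, 246 distinct signs overall) amount to counting distinct sign glosses per participant and across the pool. A minimal sketch of that counting, with invented glosses rather than the study's transcripts, is shown below.

```python
# Minimal sketch, invented data: per-participant distinct-sign counts (mean +/- SD)
# and the pooled vocabulary size across all participants.
from statistics import mean, stdev

# Hypothetical glossed transcripts: participant id -> list of sign glosses used.
sessions = {
    "P01": ["LIGHT", "ON", "TURN-OFF", "ALEXA", "MUSIC"],
    "P02": ["LIGHT", "TEMPERATURE", "UP", "ALEXA"],
    "P03": ["MUSIC", "STOP", "LIGHT", "ON", "TIMER"],
}

per_participant = [len(set(signs)) for signs in sessions.values()]
pooled_vocabulary = set().union(*(set(signs) for signs in sessions.values()))

print(f"Mean distinct signs per participant: {mean(per_participant):.1f} "
      f"+/- {stdev(per_participant):.1f}")
print(f"Total distinct signs across participants: {len(pooled_vocabulary)}")
```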
High-quality closed captioning of both speech and non-speech elements (e.g., music, sound effects, manner of speaking, and speaker identification) is essential for the accessibility of video content, especially for d/Deaf and hard-of-hearing individuals. While many regions have regulations mandating captioning for television and movies, a regulatory gap remains for the vast amount of web-based video content, including the staggering 500+ hours uploaded to YouTube every minute. Advances in automatic speech recognition have bolstered the presence of captions on YouTube. However, the technology has notable limitations, including the omission of many non-speech elements, which are often crucial for understanding content narratives. This paper examines the contemporary and historical state of non-speech information (NSI) captioning on YouTube through the creation and exploratory analysis of a dataset of over 715k videos. We identify factors that influence NSI caption practices and suggest avenues for future research to enhance the accessibility of online video content.
https://doi.org/10.1145/3613904.3642162
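As an illustration of how non-speech information might be spotted in caption files, the sketch below flags descriptors set off in square brackets or parentheses (e.g., "[applause]", "(dramatic music)"). This convention is an assumption on our part and the code is not the paper's pipeline.

```python
# Minimal sketch: count non-speech information (NSI) cues in caption lines,
# assuming NSI is rendered in square brackets or parentheses.
import re

NSI_PATTERN = re.compile(r"\[([^\]]+)\]|\(([^)]+)\)")

def extract_nsi(caption_lines):
    """Return the bracketed/parenthesized descriptors found in caption text."""
    cues = []
    for line in caption_lines:
        for match in NSI_PATTERN.finditer(line):
            cues.append((match.group(1) or match.group(2)).strip().lower())
    return cues

captions = [
    "[upbeat music]",
    "Welcome back to the channel!",
    "(door slams)",
    "I can't believe that just happened. [laughs]",
]
print(extract_nsi(captions))  # ['upbeat music', 'door slams', 'laughs']
```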
We present a group autoethnography detailing a hearing student's journey in adopting communication technologies at a mixed-hearing-ability summer research camp. Our study focuses on how this student, a research assistant with emerging American Sign Language (ASL) skills, (in)effectively communicates with deaf and hard-of-hearing (DHH) peers and faculty during the ten-week program. The DHH members also reflected on their communication with the hearing student. We depict scenarios and analyze how (in)effectively emerging technologies such as live automatic speech recognition (ASR) and typing are used to facilitate communication. We outline communication strategies that engage everyone with diverse signing skills in conversations: directing visual attention, pause-for-attention-and-proceed, and back-channeling via an expressive body. These strategies promote inclusive collaboration and leverage technological advancements. Furthermore, we delve into the factors that have motivated individuals to embrace more inclusive communication practices and provide design implications for accessible communication technologies within the mixed-hearing-ability context.
https://doi.org/10.1145/3613904.3642017