Haptic-Captioning: Using Audio-Haptic Interfaces to Enhance Speaker Indication in Real Time Captions for Deaf and Hard-of-Hearing Viewers

Abstract

Captions make the audio content of videos accessible and understandable for deaf and hard-of-hearing (DHH) people. However, in real-time captioning scenarios with multiple speakers, captions alone make it challenging for DHH viewers to identify the active speaker. To enhance the accessibility of real-time captioning, we propose Haptic-Captioning, which provides real-time vibration feedback on the wrist by directly translating the audio of the content into vibrations. We conducted three experiments to examine: (1) haptic perception (Preliminary Study), (2) the feasibility of the haptic modality alongside real-time and non-real-time visual captioning methods (Study 1), and (3) the user experience of the Haptic-Captioning system in different media contexts (Study 2). Our results highlight that Haptic-Captioning complements visual captions by improving caption readability, maintaining media engagement, enhancing understanding of emotions, and assisting speaker identification in real-time captioning scenarios. Furthermore, we discuss design implications for the future development of Haptic-Captioning.

Authors
Yiwen Wang
University of Maryland, College Park, Maryland, United States
Ziming Li
Rochester Institute of Technology, Rochester, New York, United States
Pratheep Kumar Chelladurai
Rochester Institute of Technology, Rochester, New York, United States
Wendy Dannels
Rochester Institute of Technology, Rochester, New York, United States
Tae Oh
Rochester Institute of Technology, Rochester, New York, United States
Roshan L. Peiris
Rochester Institute of Technology, Rochester, New York, United States
Paper URL

https://doi.org/10.1145/3544548.3581076

Video

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: VR / AR / Videoconferencing for Accessibility

Room Y05+Y06
6 presentations
2023-04-26 23:30:00 – 2023-04-27 00:55:00