A View on the Viewer: Gaze-Adaptive Captions for Videos

Abstract

Subtitles play a crucial role in the cross-lingual distribution of multimedia content and help communicate information where auditory content is not feasible (loud environments, hearing impairments, unknown languages). Established methods place text at the bottom of the screen, which may distract from the video. Alternative techniques place captions closer to related content (e.g., faces) but are not applicable to arbitrary videos such as documentaries. Hence, we propose to leverage live gaze as an indirect input method to adapt captions to individual viewing behavior. We implemented two gaze-adaptive methods and compared them in a user study (n=54) to traditional captions and audio-only videos. The results show that viewers with less experience with captions prefer our gaze-adaptive methods, as these assist them in reading. Furthermore, gaze distributions resulting from our methods are closer to natural viewing behavior than those of the traditional approach. Based on these results, we provide design implications for gaze-adaptive captions.

Award
Honorable Mention
Keywords
Eye Tracking
Gaze Input
Gaze-Responsive Display
Multimedia
Video Captions
Subtitles
Authors
Kuno Kurzhals
ETH Zürich, Zürich, Switzerland
Fabian Göbel
ETH Zürich, Zürich, Switzerland
Katrin Angerbauer
University of Stuttgart, Stuttgart, Germany
Michael Sedlmair
University of Stuttgart, Stuttgart, Germany
Martin Raubal
ETH Zürich, Zürich, Switzerland
DOI

10.1145/3313831.3376266

Video

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems

Session: Look at me

Paper session
Paper
311 KAUA'I
2020-04-27 20:00:00
2020-04-27 21:15:00