Tactile Emotions: Multimodal Affective Captioning with Haptics Improves Narrative Engagement for d/Deaf and Hard-of-Hearing Viewers

Abstract

This paper explores a multimodal approach for translating the emotional cues present in speech, designed with Deaf and Hard-of-Hearing (DHH) individuals in mind. Prior work has focused on visual cues applied to captions, successfully conveying whether a speaker's words carry a negative or positive tone (valence), but with mixed results for the intensity of those emotions (arousal). We propose a novel method that uses haptic feedback to communicate a speaker's arousal level through vibrations on a wrist-worn device. In a formative study with 16 DHH participants, we tested six haptic patterns and found that participants preferred a single 75 Hz vibration per word to encode arousal. In a follow-up study with 27 DHH participants, we paired this pattern with visual cues and measured narrative engagement with audio-visual content. Results indicate that combining haptics with visuals significantly increased engagement compared to both a conventional captioning baseline and a visuals-only affective captioning style.
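As an illustration of the arousal-to-haptics mapping the abstract describes, the Python sketch below synthesizes one 75 Hz vibration burst per caption word, with amplitude scaled by the speaker's arousal. It is not taken from the paper; the word timings, arousal values, sample rate, and function names are all hypothetical, and a real system would stream each burst to a wrist-worn actuator rather than print it.

    import numpy as np

    SAMPLE_RATE = 8000   # assumed sample rate for the actuator driver (hypothetical)
    CARRIER_HZ = 75      # vibration frequency preferred in the formative study

    def word_vibration(duration_s, arousal):
        """Return a 75 Hz sine burst whose amplitude encodes arousal (0..1)."""
        t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
        return arousal * np.sin(2 * np.pi * CARRIER_HZ * t)

    # Hypothetical caption track: (word, start time in s, duration in s, arousal)
    words = [
        ("I",       0.00, 0.15, 0.2),
        ("can't",   0.20, 0.25, 0.8),
        ("believe", 0.50, 0.30, 0.9),
        ("it",      0.85, 0.15, 0.7),
    ]

    for word, start, dur, arousal in words:
        burst = word_vibration(dur, arousal)
        # A real system would send `burst` to the wrist-worn device at `start`.
        print(f"{start:5.2f}s  {word:<8}  arousal={arousal:.1f}  samples={len(burst)}")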

Authors
Caluã de Lacerda Pataca
Rochester Institute of Technology, Rochester, New York, United States
Saad Hassan
Tulane University, New Orleans, Louisiana, United States
Lloyd May
Stanford University, Palo Alto, California, United States
Michelle M. Olson
Rochester Institute of Technology, Rochester, New York, United States
Toni D'aurio
Rochester Institute of Technology, Rochester, New York, United States
Roshan L. Peiris
Rochester Institute of Technology, Rochester, New York, United States
Matt Huenerfauth
Rochester Institute of Technology, Rochester, New York, United States
DOI

10.1145/3706598.3713304

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713304

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Assistive Technologies

Room: G314+G315
7 presentations
2025-05-01 01:20:00 to 2025-05-01 02:50:00