Weaving Sound Information to Support Real-Time Sensemaking of Auditory Environments: Co-Designing with a DHH User

Abstract

Current AI sound awareness systems can provide deaf and hard of hearing (DHH) people with information about sounds, including discrete sound sources and transcriptions. However, synthesizing AI outputs based on DHH people’s ever-changing intents in complex auditory environments remains a challenge. In this paper, we describe the co-design process of SoundWeaver, a sound awareness system prototype that dynamically weaves AI outputs from different AI models based on users’ intents and presents synthesized information through a heads-up display. Adopting a Research through Design perspective, we created SoundWeaver with one DHH co-designer, adapting it to his personal contexts and goals (e.g., cooking at home and chatting in a game store). Through this process, we present design implications for the future of “intent-driven” AI systems for sound accessibility.
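
The abstract's central idea, dynamically weaving the outputs of several AI models according to the user's current intent, can be sketched in a few lines. The Python below is purely illustrative and not from the paper; the event type, intent profiles, and weave function are hypothetical stand-ins for SoundWeaver's actual pipeline.

# Illustrative sketch (not the authors' code) of an "intent-driven"
# sound-information pipeline as described in the abstract: outputs from
# separate AI models (sound recognition, speech transcription) are
# filtered and merged according to the user's current intent, then
# rendered as a short heads-up-display message. All names are hypothetical.

from dataclasses import dataclass


@dataclass
class SoundEvent:
    label: str                     # e.g., "timer beep", "speech"
    confidence: float
    transcript: str | None = None  # filled in when a transcriber ran


# Hypothetical intent profiles: which sound labels matter for each activity.
INTENT_FILTERS = {
    "cooking": {"timer beep", "boiling", "smoke alarm"},
    "conversation": {"speech"},
}


def weave(intent: str, events: list[SoundEvent], min_conf: float = 0.6) -> str:
    """Merge model outputs into one HUD line, keeping only events
    relevant to the active intent and above a confidence floor."""
    relevant = [
        e for e in events
        if e.label in INTENT_FILTERS.get(intent, set()) and e.confidence >= min_conf
    ]
    # Prefer the transcript when speech was transcribed.
    parts = [e.transcript if e.transcript else e.label for e in relevant]
    return " | ".join(parts) if parts else "(no relevant sounds)"


if __name__ == "__main__":
    events = [
        SoundEvent("timer beep", 0.92),
        SoundEvent("speech", 0.88, transcript="Order for Jeremy!"),
    ]
    print(weave("cooking", events))       # -> "timer beep"
    print(weave("conversation", events))  # -> "Order for Jeremy!"

Under this sketch, switching the intent changes which model outputs reach the display, which is the behavior the paper's co-design process tailors to contexts like cooking at home or chatting in a game store.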

Authors
Jeremy Zhengqi Huang
University of Michigan, Ann Arbor, Michigan, United States
Jaylin Herskovitz
University of Michigan, Ann Arbor, Michigan, United States
Liang-Yuan Wu
University of Michigan, Ann Arbor, Michigan, United States
Cecily Morrison
Microsoft Research, Cambridge, United Kingdom
Dhruv Jain
University of Michigan, Ann Arbor, Michigan, United States
DOI

10.1145/3706598.3714268

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714268

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Auditory UI

Room: G402
7 presentations
2025-04-28 20:10–21:40