Show, Not Tell: A Human-AI Collaborative Approach for Designing Sound Awareness Systems

Abstract

Current sound recognition systems for deaf and hard of hearing (DHH) people identify sound sources or discrete events. However, these systems do not distinguish between similar-sounding events (e.g., a patient monitor beep vs. a microwave beep). In this paper, we introduce HACS, a novel futuristic approach to designing human-AI sound awareness systems. HACS assigns AI models to identify sounds based on their characteristics (e.g., a beep) and prompts DHH users to combine this information with their contextual knowledge (e.g., “I am in a kitchen”) to recognize sound events (e.g., a microwave). As a first step toward implementing HACS, we articulated a sound taxonomy that classifies sounds based on sound characteristics, using insights from a multi-phase research process with people of mixed hearing abilities. We then performed a qualitative evaluation (with 9 DHH people) and a quantitative evaluation (with a sound recognition model). Findings demonstrate the initial promise of HACS for designing accurate and reliable human-AI systems.
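To make the division of labor concrete, the following is a minimal illustrative sketch of a HACS-style pipeline: the AI reports only a sound characteristic, and the user resolves the actual event from context. Everything here is an assumption for illustration, including the characteristic labels, the `classify_characteristic` stub, and the notification wording; none of it is the paper's taxonomy or implementation.

```python
# Illustrative sketch only (not the paper's system): the model reports a
# sound *characteristic* and the DHH user infers the event from context.
from dataclasses import dataclass

# Hypothetical characteristic labels, invented for this example.
CHARACTERISTICS = ["beep", "buzz", "alarm tone", "hum", "knock"]


@dataclass
class Notification:
    characteristic: str   # what the model heard, e.g. "beep"
    confidence: float     # model confidence in that characteristic


def classify_characteristic(audio_clip: bytes) -> Notification:
    """Stand-in for a sound-characteristic recognition model."""
    # A real system would run an audio classifier here; a fixed result
    # keeps the sketch self-contained and runnable.
    return Notification(characteristic="beep", confidence=0.87)


def notify_user(note: Notification, user_context: str) -> str:
    """Show the characteristic; leave the event inference to the user."""
    return (f"Detected a {note.characteristic} "
            f"({note.confidence:.0%} confidence). "
            f"Given your context ('{user_context}'), "
            f"what do you think produced it?")


if __name__ == "__main__":
    note = classify_characteristic(b"")  # placeholder audio input
    print(notify_user(note, "I am in a kitchen"))  # user resolves: microwave
```

The key design choice this sketch illustrates is that the model never commits to a source label (patient monitor vs. microwave); it surfaces only the characteristic it can identify reliably and defers the final judgment to the user's contextual knowledge.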

Authors
Jeremy Zhengqi Huang
University of Michigan, Ann Arbor, Michigan, United States
Reyna Wood
University of Michigan, Ann Arbor, Michigan, United States
Hriday Chhabria
University of Michigan, Ann Arbor, Michigan, United States
Dhruv Jain
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

doi.org/10.1145/3613904.3642062

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Sound Interaction

Room: 323C
4 presentations
2024-05-13, 20:00–21:20