ProtoSound: A Personalized and Scalable Sound Recognition System for Deaf and Hard-of-Hearing Users

Abstract

Recent advances have enabled automatic sound recognition systems for deaf and hard of hearing (DHH) users on mobile devices. However, these tools use pre-trained, generic sound recognition models, which do not meet the diverse needs of DHH users. We introduce ProtoSound, an interactive system for customizing sound recognition models by recording a few examples, thereby enabling personalized and fine-grained categories. ProtoSound is motivated by prior work examining sound awareness needs of DHH people and by a survey we conducted with 472 DHH participants. To evaluate ProtoSound, we characterized performance on two real-world sound datasets, showing significant improvement over state-of-the-art (e.g., +9.7% accuracy on the first dataset). We then deployed ProtoSound's end-user training and real-time recognition through a mobile application and recruited 19 hearing participants who listened to the real-world sounds and rated the accuracy across 56 locations (e.g., homes, restaurants, parks). Results show that ProtoSound personalized the model on-device in real-time and accurately learned sounds across diverse acoustic contexts. We close by discussing open challenges in personalizable sound recognition, including the need for better recording interfaces and algorithmic improvements.
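The abstract describes personalizing a recognizer from only a few user-recorded examples; the system's name suggests a prototype-based few-shot approach, where each custom sound class is summarized by the average of its example embeddings and new sounds are assigned to the nearest prototype. The sketch below is a minimal illustration of that idea under those assumptions; the function names, class labels, and embedding dimension are placeholders, not the authors' implementation.

```python
import numpy as np

def make_prototypes(embeddings_by_class):
    """Average each class's few recorded examples (in embedding space)
    into a single prototype vector per class."""
    return {label: np.mean(np.stack(embs), axis=0)
            for label, embs in embeddings_by_class.items()}

def classify(query_embedding, prototypes):
    """Assign a query sound to the class whose prototype is nearest
    in Euclidean distance (nearest-prototype classification)."""
    distances = {label: np.linalg.norm(query_embedding - proto)
                 for label, proto in prototypes.items()}
    return min(distances, key=distances.get)

# Illustrative usage: random vectors stand in for embeddings produced by
# a pretrained audio encoder (hypothetical; not from the paper).
rng = np.random.default_rng(0)
support = {
    "doorbell": [rng.normal(0.0, 1.0, 128) for _ in range(5)],
    "microwave_beep": [rng.normal(1.0, 1.0, 128) for _ in range(5)],
}
prototypes = make_prototypes(support)
query = rng.normal(1.0, 1.0, 128)
print(classify(query, prototypes))
```

Because the prototypes are just per-class averages, adding a new user-defined sound only requires embedding a handful of recordings, which is what makes on-device, real-time personalization plausible.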

Authors
Dhruv Jain
University of Washington, Seattle, Washington, United States
Khoa Huynh Anh Nguyen
University of Washington, Seattle, Washington, United States
Steven M. Goodman
University of Washington, Seattle, Washington, United States
Rachel Grossman-Kahn
University of Washington, Seattle, Washington, United States
Hung Ngo
University of Washington, Seattle, Washington, United States
Aditya Kusupati
University of Washington, Seattle, Washington, United States
Ruofei Du
Google, San Francisco, California, United States
Alex Olwal
Google Inc., Mountain View, California, United States
Leah Findlater
University of Washington, Seattle, Washington, United States
Jon E. Froehlich
University of Washington, Seattle, Washington, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502020

Video

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Audio for Accessibility

4 presentations
2022-05-03 01:15:00 – 2022-05-03 02:30:00