Look Once to Hear: Target Speech Hearing with Noisy Examples

Abstract

In crowded settings, the human brain can focus on speech from a target speaker, given prior knowledge of how they sound. We introduce a novel intelligent hearable system that achieves this capability, enabling target speech hearing: ignoring all interfering speech and noise except the target speaker. A naive approach is to require a clean speech example to enroll the target speaker. This, however, is poorly aligned with the hearable application domain, since obtaining a clean example is challenging in real-world scenarios, creating a unique user-interface problem. We present the first enrollment interface in which the wearer looks at the target speaker for a few seconds to capture a single, short, highly noisy, binaural example of that speaker. This noisy example is used for enrollment and subsequent speech extraction in the presence of interfering speakers and noise. Our system achieves a signal quality improvement of 7.01 dB using less than 5 seconds of noisy enrollment audio and can process 8 ms audio chunks in 6.24 ms on an embedded CPU. Our user studies demonstrate generalization to real-world static and mobile speakers in previously unseen indoor and outdoor multipath environments. Finally, our enrollment interface for noisy examples causes no performance degradation compared to clean examples, while being convenient and user-friendly. Taking a step back, this paper takes an important step toward augmenting human auditory perception with artificial intelligence.
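The abstract's latency figures imply a real-time constraint: each 8 ms audio chunk must finish processing before the next one arrives. A minimal sketch (not the authors' code) of the real-time factor this implies, using only the two numbers reported above:

```python
# Sketch: real-time factor (RTF) implied by the reported numbers.
# RTF = processing time / chunk duration; RTF < 1.0 means the system
# keeps up with the incoming audio stream.
CHUNK_MS = 8.0   # audio chunk duration reported in the abstract
PROC_MS = 6.24   # per-chunk processing time on the embedded CPU

rtf = PROC_MS / CHUNK_MS
print(f"real-time factor: {rtf:.2f}")  # 0.78, i.e. 22% headroom per chunk
```

An RTF of 0.78 leaves roughly 1.76 ms of slack per chunk, which is what makes on-device streaming operation feasible.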

Award
Honorable Mention
Authors
Bandhav Veluri
University of Washington, Seattle, Washington, United States
Malek Itani
University of Washington, Seattle, Washington, United States
Tuochao Chen
Computer Science and Engineering, University of Washington, Seattle, Washington, United States
Takuya Yoshioka
IEEE, Redmond, Washington, United States
Shyamnath Gollakota
University of Washington, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3613904.3642057

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Assistive Interactions: Audio Interactions and Deaf and Hard of Hearing Users

Room 313A
4 presentations
2024-05-14 23:00:00 – 2024-05-15 00:20:00