SpeechCompass: Enhancing Mobile Captioning with Diarization and Directional Guidance via Multi-Microphone Localization

Abstract

Speech-to-text capabilities on mobile devices have proven helpful for hearing and speech accessibility, language translation, note-taking, and meeting transcripts. However, our foundational large-scale survey (n=263) shows that the inability to distinguish and indicate speaker direction makes them challenging to use in group conversations. SpeechCompass addresses this limitation through real-time, multi-microphone speech localization, where the direction of speech allows visual separation and guidance (e.g., arrows) in the user interface. We introduce efficient real-time audio localization algorithms and custom sound-perception hardware, running on a low-power microcontroller with four integrated microphones, which we characterize in technical evaluations. Informed by a large-scale survey (n=494), we conducted an in-person study of group conversations with eight frequent users of mobile speech-to-text, who provided feedback on five visualization styles. Participants consistently valued diarization and localization visualization, and all agreed on the potential of directional guidance for group conversations.
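The abstract does not spell out the localization algorithm itself. As a rough illustration of how multi-microphone speech localization of this kind is commonly done, the sketch below estimates direction of arrival from the time difference between one microphone pair using GCC-PHAT (generalized cross-correlation with phase transform). This is a generic baseline, not the paper's implementation; the function names, the 4 cm microphone spacing, and the 16 kHz sample rate are assumptions chosen for the example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s at room temperature
MIC_SPACING_M = 0.04     # assumed spacing of one microphone pair (not from the paper)

def gcc_phat(sig, ref, fs, interp=16):
    """Delay of `sig` relative to `ref` in seconds (positive if `sig` lags),
    estimated with the generalized cross-correlation phase transform."""
    n = len(sig) + len(ref)
    # Cross-power spectrum, whitened by its magnitude (the PHAT weighting):
    # keeping only phase sharpens the correlation peak for broadband speech.
    r = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    r /= np.abs(r) + 1e-15
    # Inverse transform at `interp`x the rate for sub-sample peak resolution.
    cc = np.fft.irfft(r, n=interp * n)
    max_shift = interp * n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)

def azimuth_from_tdoa(tau, spacing=MIC_SPACING_M):
    """Angle in degrees between the source and the microphone-pair axis."""
    cos_theta = np.clip(SPEED_OF_SOUND * tau / spacing, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

if __name__ == "__main__":
    fs = 16000
    rng = np.random.default_rng(0)
    frame = rng.standard_normal(4096)              # stand-in for one speech frame
    delayed = np.concatenate(([0.0], frame[:-1]))  # same signal, one sample later
    tau = gcc_phat(delayed, frame, fs)             # expect +62.5 us at 16 kHz
    print(f"TDOA {tau * 1e6:+.1f} us -> azimuth {azimuth_from_tdoa(tau):.1f} deg")
```

A single microphone pair only resolves the angle up to a front/back ambiguity; a four-microphone device like the one described would combine several pairwise estimates (or search over candidate angles) to recover a full 360° direction, and a low-power microcontroller would typically run a fixed-point variant of this computation per audio frame.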

Award
Best Paper
Authors
Artem Dementyev
Google Inc., Mountain View, California, United States
Dimitri Kanevsky
Google, Mountain View, California, United States
Samuel Yang
Google, Mountain View, California, United States
Mathieu Parvaix
Google Research, Mountain View, California, United States
Chiong Lai
Google, Mountain View, California, United States
Alex Olwal
Google Inc., Mountain View, California, United States
DOI

10.1145/3706598.3713631

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713631

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Auditory UI

G402
7 presentations
2025-04-28 20:10:00 – 21:40:00