DualVoice: Speech Interaction that Discriminates between Normal and Whispered Voice Input

Abstract

Interactions based on automatic speech recognition (ASR) have become widely used, and speech input is increasingly employed to create documents. However, because there is no easy way to distinguish between commands being issued and text intended as input, misrecognitions are difficult to identify and correct, so documents must be edited and corrected manually. Inputting symbols and commands is also challenging, as these may be misrecognized as text. To address these problems, this study proposes a speech interaction method called DualVoice, in which commands are spoken in a whispered voice and text in a normal voice. The proposed method requires no specialized hardware other than a regular microphone, enabling completely hands-free interaction. It can be used in a wide range of situations where speech recognition is already available, from text entry to mobile and wearable computing. Two neural networks were designed in this study: one to discriminate normal speech from whispered speech, and a second to recognize whispered speech. A prototype text input system was then developed to show how normal and whispered voice can be combined in speech-based text entry. Other potential applications of DualVoice are also discussed.
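The pipeline the abstract describes is a two-stage one: a discriminator first decides whether an utterance is whispered or normally voiced, and that decision routes the audio either to command handling or to dictation. The Python sketch below illustrates only this routing idea; the network architecture, the log-mel feature settings, the 16 kHz sample rate, and the function names are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn
import torchaudio

class WhisperVsNormalClassifier(nn.Module):
    """Binary whisper-vs-normal classifier over log-mel features (hypothetical architecture)."""
    def __init__(self, n_mels: int = 64):
        super().__init__()
        self.melspec = torchaudio.transforms.MelSpectrogram(sample_rate=16_000, n_mels=n_mels)
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, 2),  # logits: [normal, whisper]
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) -> log-mel (batch, 1, n_mels, frames) -> logits
        feats = self.to_db(self.melspec(waveform)).unsqueeze(1)
        return self.net(feats)

def route_utterance(waveform: torch.Tensor, classifier: WhisperVsNormalClassifier) -> str:
    """Dispatch: whispered audio is treated as a command, normal voice as dictated text."""
    with torch.no_grad():
        is_whisper = classifier(waveform).argmax(dim=-1).item() == 1
    return "command" if is_whisper else "dictation"

if __name__ == "__main__":
    clf = WhisperVsNormalClassifier()
    one_second = torch.randn(1, 16_000)  # batch of one 1-second utterance
    print(route_utterance(one_second, clf))

Keeping the discriminator separate from the recognizers mirrors the paper's use of two dedicated networks, so a whisper recognizer can be trained independently of the normal-voice ASR that handles dictation.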

Author
Jun Rekimoto
The University of Tokyo, Tokyo, Japan
Paper URL

https://doi.org/10.1145/3526113.3545685

Conference: UIST 2022

The ACM Symposium on User Interface Software and Technology

Session: Modeling and Intent

6 presentations
2022-11-02 23:30:00 – 2022-11-03 01:00:00