Acceptability of Speech and Silent Speech Input Methods in Private and Public


Silent speech input converts non-acoustic features such as tongue and lip movements into text. It has been demonstrated as a promising input method on mobile devices and has been explored for a variety of audiences and contexts where the acoustic signal is unavailable (e.g., people with speech disorders) or unreliable (e.g., noisy environments). Though the method shows promise, very little is known about people's perceptions of using it. In this work, we first conduct two user studies to explore users' attitudes towards the method, with a particular focus on social acceptance and error tolerance. Results show that people perceive silent speech as more socially acceptable than speech input and are willing to tolerate more errors with it to uphold privacy and security. We then conduct a third study to identify a suitable method for providing real-time feedback on silent speech input. Results show that users find an abstract feedback method effective and significantly more private and secure than a commonly used video feedback method.

Laxmi Pandey
University of California, Merced, Merced, California, United States
Khalad Hasan
University of British Columbia, Kelowna, British Columbia, Canada
Ahmed Sabbir Arif
University of California, Merced, Merced, California, United States




Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (CHI)

Session: Meetings, Chats, and Speech

[A] Paper Room 15, 2021-05-10 17:00:00~2021-05-10 19:00:00 / [B] Paper Room 15, 2021-05-11 01:00:00~2021-05-11 03:00:00 / [C] Paper Room 15, 2021-05-11 09:00:00~2021-05-11 11:00:00
11 presentations