Acceptability of Speech and Silent Speech Input Methods in Private and Public

Abstract

Silent speech input converts non-acoustic features like tongue and lip movements into text. It has been demonstrated as a promising input method on mobile devices and has been explored for a variety of audiences and contexts where the acoustic signal is unavailable (e.g., people with speech disorders) or unreliable (e.g., noisy environments). Although the method shows promise, very little is known about people's perceptions of using it. In this work, we first conduct two user studies to explore users' attitudes towards the method, with a particular focus on social acceptance and error tolerance. Results show that people perceive silent speech as more socially acceptable than speech input and are willing to tolerate more errors with it to uphold privacy and security. We then conduct a third study to identify a suitable method for providing real-time feedback on silent speech input. Results show users find an abstract feedback method effective and significantly more private and secure than a commonly used video feedback method.
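The opening sentence of the abstract describes the core idea: mapping non-acoustic articulatory signals (e.g., lip and tongue movement features) to text. The sketch below is purely illustrative and is not the system evaluated in the paper; it assumes per-frame lip-landmark features of a hypothetical dimensionality and shows how such a sequence might be mapped to per-frame character probabilities with a small recurrent network, in the style of common lipreading/CTC pipelines.

```python
# Illustrative sketch only -- not the paper's system. Assumes silent speech is
# decoded from per-frame articulatory features (hypothetical 40-D lip/tongue
# landmarks) using a small bidirectional LSTM that emits per-frame character
# log-probabilities, as is typical for CTC-style lipreading decoders.
import torch
import torch.nn as nn

class SilentSpeechDecoder(nn.Module):
    def __init__(self, feat_dim=40, hidden=128, num_chars=28):
        # num_chars: 26 letters + space + CTC blank (illustrative vocabulary)
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_chars)

    def forward(self, x):
        # x: (batch, frames, feat_dim) articulatory feature sequence
        h, _ = self.rnn(x)                        # (batch, frames, 2 * hidden)
        return self.out(h).log_softmax(dim=-1)    # per-frame character log-probs

# Toy forward pass on random "lip movement" features.
model = SilentSpeechDecoder()
feats = torch.randn(1, 75, 40)   # e.g., 75 video frames of 40-D landmark features
log_probs = model(feats)
print(log_probs.shape)           # torch.Size([1, 75, 28])
```

In a real pipeline, the log-probabilities would be trained with a CTC loss against character transcripts and decoded with beam search; all shapes and the feature representation here are assumptions for illustration.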

Authors
Laxmi Pandey
University of California, Merced, Merced, California, United States
Khalad Hasan
University of British Columbia, Kelowna, British Columbia, Canada
Ahmed Sabbir Arif
University of California, Merced, Merced, California, United States
DOI

10.1145/3411764.3445430

Paper URL

https://doi.org/10.1145/3411764.3445430

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Meetings, Chats, and Speech

[A] Paper Room 15, 2021-05-10 17:00:00 ~ 2021-05-10 19:00:00
[B] Paper Room 15, 2021-05-11 01:00:00 ~ 2021-05-11 03:00:00
[C] Paper Room 15, 2021-05-11 09:00:00 ~ 2021-05-11 11:00:00