A Mixed-Methods Approach to Understanding User Trust after Voice Assistant Failures

Abstract

Despite major recent gains in natural language understanding driven by large language models, voice assistants still often fail to meet user expectations. In this study, we conducted a mixed-methods analysis of how voice assistant failures affect users' trust in their voice assistants. To illustrate how users experience these failures, we contribute a crowdsourced dataset of 199 voice assistant failures, categorized across 12 failure sources. Drawing on interview and survey data, we find that certain failures, such as those caused by overcapturing users' input, erode user trust more than others. We also examine how failures affect users' willingness to rely on voice assistants for future tasks. After a failure, users often stop using their voice assistant for the specific task that failed, but typically only for a short period before resuming similar usage. We demonstrate the importance of low-stakes tasks, such as playing music, in rebuilding trust after failures.

Award
Honorable Mention
Authors
Amanda Baughan
University of Washington, Seattle, Washington, United States
Xuezhi Wang
Google Brain, New York, New York, United States
Ariel Liu
Google, Mountain View, California, United States
Allison Mercurio
Google, Mountain View, California, United States
Jilin Chen
Google, Mountain View, California, United States
Xiao Ma
Cornell Tech, New York, New York, United States
Paper URL

https://doi.org/10.1145/3544548.3581152

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: AI Trust, Transparency and Fairness

Room Y05+Y06
6 presentations
2023-04-25, 20:10–21:35