Can Voice Assistants Be Microaggressors? Cross-Race Psychological Responses to Failures of Automatic Speech Recognition

Abstract

Language technologies have a racial bias, committing greater errors for Black users than for white users. However, little work has evaluated what effect these disparate error rates have on users themselves. The present study aims to understand if speech recognition errors in human-computer interactions may mirror the same effects as misunderstandings in interpersonal cross-race communication. In a controlled experiment (N=108), we randomly assigned Black and white participants to interact with a voice assistant pre-programmed to exhibit a high versus low error rate. Results revealed that Black participants in the high error rate condition, compared to Black participants in the low error rate condition, exhibited significantly higher levels of self-consciousness, lower levels of self-esteem and positive affect, and less favorable ratings of the technology. White participants did not exhibit this disparate pattern. We discuss design implications and the diverse research directions to which this initial study aims to contribute.

Authors
Kimi Wenzel
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Nitya Devireddy
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Cam Davison
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Geoff Kaufman
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3544548.3581357

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Conversation, Communication & Collaborative AI

Hall E
6 presentations
2023-04-27, 18:00–19:30