The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction

Abstract

From ELIZA to Alexa, Conversational Agents (CAs) have been deliberately designed to elicit or project empathy. Although empathy can help technology better serve human needs, it can also be deceptive and potentially exploitative. In this work, we characterize empathy in interactions with CAs, highlighting the importance of distinguishing evocations of empathy between two humans from ones between a human and a CA. To this end, we systematically prompt CAs backed by large language models (LLMs) to display empathy while conversing with, or about, 65 distinct human identities, and also compare how different LLMs display or model empathy. We find that CAs make value judgments about certain identities, and can be encouraging of identities related to harmful ideologies (e.g., Nazism and xenophobia). Moreover, a computational approach to understanding empathy reveals that despite their ability to display empathy, CAs do poorly when interpreting and exploring a user's experience, contrasting with their human counterparts.

Award
Honorable Mention
Authors
Andrea Cuadra
Stanford University, Stanford, California, United States
Maria Wang
Stanford University, Stanford, California, United States
Lynn Andrea Stein
Franklin W. Olin College of Engineering, Needham, Massachusetts, United States
Malte F. Jung
Cornell University, Ithaca, New York, United States
Nicola Dell
Cornell Tech, New York, New York, United States
Deborah Estrin
Cornell Tech, New York, New York, United States
James A. Landay
Stanford University, Stanford, California, United States
Paper URL

doi.org/10.1145/3613904.3642336

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Health and AI B

315
5 presentations
2024-05-16 01:00:00 – 2024-05-16 02:20:00