Empowering Calibrated (Dis-)Trust in Conversational Agents: A User Study on the Persuasive Power of Limitation Disclaimers vs. Authoritative Style

Abstract

While conversational agents based on Large Language Models (LLMs) can drive progress in many domains, they are prone to generating faulty information. To ensure an efficient, safe, and satisfactory user experience that maximizes the benefits of these systems, users must be empowered to judge the reliability of system outputs. Here, both disclaimers and agents' communicative style are pivotal design elements. In an online study with 594 participants, we investigated how these affect users' trust and a mock-up agent's persuasiveness, based on an established framework from social psychology. While prior information on potential inaccuracies or faulty information did not affect trust, an authoritative communicative style elicited more trust. A trusted agent was also more persuasive, resulting in more positive attitudes regarding the subject of the conversation. These results imply that disclaimers on agents' limitations fail to effectively alter users' trust but can be supported by an appropriate communicative style during interaction.

Authors
Luise Metzger
Ulm University, Ulm, Germany
Linda Miller
Ulm University, Ulm, Germany
Martin Baumann
Ulm University, Ulm, Germany
Johannes Kraus
Ulm University, Ulm, Germany
Paper URL

doi.org/10.1145/3613904.3642122

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Remote Presentations: Highlight on Chatbots and LLMs

Remote Sessions
4 presentations
2024-05-15 18:00:00 to 2024-05-16 02:20:00