Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies

Abstract

Large language models (LLMs) can produce erroneous responses that sound fluent and convincing, raising the risk that users will rely on these responses as if they were correct. Mitigating such overreliance is a key challenge. Through a think-aloud study in which participants use an LLM-infused application to answer objective questions, we identify several features of LLM responses that shape users' reliance: explanations (supporting details for answers), inconsistencies in explanations, and sources. Through a large-scale, pre-registered, controlled experiment (N=308), we isolate and study the effects of these features on users' reliance, accuracy, and other measures. We find that the presence of explanations increases reliance on both correct and incorrect responses. However, we observe less reliance on incorrect responses when sources are provided or when explanations exhibit inconsistencies. We discuss the implications of these findings for fostering appropriate reliance on LLMs.

Award
Honorable Mention
Authors
Sunnie S. Y. Kim
Princeton University, Princeton, New Jersey, United States
Jennifer Wortman Vaughan
Microsoft Research, New York, New York, United States
Q. Vera Liao
Microsoft Research, Montreal, Quebec, Canada
Tania Lombrozo
Princeton University, Princeton, New Jersey, United States
Olga Russakovsky
Princeton University, Princeton, New Jersey, United States
DOI

10.1145/3706598.3714020

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714020

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Explainable AI

Room: G303
7 presentations
2025-04-29 01:20:00 – 2025-04-29 02:50:00