Deceptive Explanations by Large Language Models Lead People to Change their Beliefs About Misinformation More Often than Honest Explanations

Abstract

Advanced Artificial Intelligence (AI) systems, specifically large language models (LLMs), can generate not only misinformation but also deceptive explanations that justify and propagate false information and discredit true information. We examined the impact of deceptive AI-generated explanations on individuals' beliefs in a pre-registered online experiment with 11,780 observations from 589 participants. We found that, in addition to being more persuasive than accurate and honest explanations, AI-generated deceptive explanations can significantly amplify belief in false news headlines and undermine belief in true ones, compared to AI systems that simply misclassify the headline as true or false. Moreover, our results show that logically invalid explanations are deemed less credible, which diminishes the effects of deception. This underscores the importance of teaching logical reasoning and critical thinking skills to identify logically invalid arguments, fostering greater resilience against advanced AI-driven misinformation.

Award
Honorable Mention
Authors
Valdemar Danry
MIT, Cambridge, Massachusetts, United States
Pat Pataranutaporn
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Matthew Groh
Northwestern, Evanston, Illinois, United States
Ziv Epstein
Stanford University, Stanford, California, United States
DOI

10.1145/3706598.3713408

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713408


Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Shaping Cognitive Processes

Room: G416+G417
7 presentations
2025-05-01 01:20:00 – 2025-05-01 02:50:00