Dialogues with AI Reduce Beliefs in Misinformation but Build No Lasting Discernment Skills

Abstract

Given the growing prevalence of fake information, including increasingly realistic AI-generated news, there is an urgent need to train people to better evaluate and detect misinformation. While interactions with AI have been shown to durably reduce people's beliefs in false information, it is unclear whether these interactions also teach people the skills to discern false information themselves. We conducted a month-long study in which 67 participants classified news headline-image pairs as real or fake, discussed their assessments with an AI system, and then completed an unassisted evaluation of unseen news items, allowing us to measure accuracy before, during, and after AI assistance. While AI assistance produced immediate improvements during AI-assisted sessions (+21% on average), participants' unassisted performance on new items declined significantly, by 15.3% in week 4 compared to week 0. These results indicate that while AI may help immediately, it may ultimately degrade long-term misinformation detection abilities.

Award
Honorable Mention
Authors
Anku Rani
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Valdemar Danry
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Paul Pu Liang
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Andrew Lippman
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Pattie Maes
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Human Behavior with AI Systems

M2 - Room M211/212
7 presentations
2026-04-14, 20:15–21:45