Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions

Abstract

Large language models are capable of producing high volumes of human-like text and can be used to generate persuasive misinformation. However, the risks remain under-explored. To address this gap, this work first examined the characteristics of AI-generated misinformation (AI-misinfo) compared with human creations, and then evaluated the applicability of existing solutions. We compiled human-created COVID-19 misinformation and abstracted it into narrative prompts for a language model to output AI-misinfo. We found significant linguistic differences within human-AI pairs, as well as patterns in how AI-misinfo enhances details, communicates uncertainties, draws conclusions, and simulates personal tones. While existing detection models remained capable of classifying AI-misinfo, a significant performance drop compared to human-misinfo was observed. Results suggested that existing information assessment guidelines had questionable applicability, as AI-misinfo tended to meet criteria for evidence credibility, source transparency, and limitation acknowledgment. We discuss implications for practitioners, researchers, and journalists, as AI can create new challenges for the societal problem of misinformation.

Award
Honorable Mention
Authors
Jiawei Zhou
Georgia Institute of Technology, Atlanta, Georgia, United States
Yixuan Zhang
Georgia Institute of Technology, Atlanta, Georgia, United States
Qianni Luo
Ohio University, Athens, Ohio, United States
Andrea G. Parker
Georgia Institute of Technology, Atlanta, Georgia, United States
Munmun De Choudhury
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3544548.3581318

Video

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Large Language Models

Hall C
6 presentations
2023-04-25 23:30:00 – 2023-04-26 00:55:00