"Can LLMs Persuade Humans with Deception?": From a Deceptive Strategy Taxonomy to a Large-Scale Empirical Study

Abstract

Beyond hallucinations, Large Language Models (LLMs) can craft deceptive arguments that erode users' critical thinking, posing a significant yet underexamined societal risk. To address this gap, we develop a taxonomy of eight deceptive persuasion strategies by integrating top-down rhetorical theory with a bottom-up analysis of 3,360 AI-generated messages from four LLM families, and we examine their effects on user perceptions. Through a large-scale user study (N=602) complemented by a think-aloud protocol, we found that participants were vulnerable to Information Manipulation and Uncertainty Exploitation, especially when a message contradicted their prior beliefs. Vulnerability was significantly higher among participants with low cognitive reflection, low topic knowledge, and low topic involvement. Qualitative analyses further revealed that participants were persuaded by the plausibility of an overall narrative even when they distrusted specific details, interpreting deceptive outputs as logically framed information that broadens their perspective. We discuss critical implications of these findings for the design of trustworthy AI systems, adaptive user interfaces, and targeted literacy education.

Authors
Haein Yeo
Hanyang University, Seoul, Korea, Republic of
Seungwan Jin
Hanyang University, Seoul, Korea, Republic of
Taehyung Noh
Hanyang University, Seoul, Korea, Republic of
Yejin Shin
Telecommunications Technology Association, Seoul, Korea, Republic of
Sangyeon Kang
Telecommunications Technology Association, Seoul, Korea, Republic of
Sangwoo Heo
Naver, Seoul, Korea, Republic of
Jiwon Chung
Naver, Seoul, Korea, Republic of
Hwarim Hyun
NAVER, Seoul, Korea, Republic of
Kyungsik Han
Hanyang University, Seoul, Korea, Republic of
Video

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: The Dark Sides of AI

Area 1 + 2 + 3: theatre
7 presentations
2026-04-17, 20:15–21:45