Beyond hallucinations, Large Language Models (LLMs) can craft deceptive arguments that erode users' critical thinking, posing a significant yet underexamined societal risk. To address this gap, we develop a taxonomy of eight deceptive persuasion strategies by integrating top-down rhetorical theory with a bottom-up analysis of 3,360 AI-generated messages from four LLM families, and we examine their effects on user perceptions. Through a large-scale user study (N=602) complemented by a think-aloud protocol, we found that participants were vulnerable to \textit{Information Manipulation} and \textit{Uncertainty Exploitation}, especially when a message contradicted their prior beliefs. Vulnerability was significantly higher for participants with low cognitive reflection, low topic knowledge, and low topic involvement. Qualitative analyses further revealed that participants were persuaded by the plausibility of an overall narrative even when they distrusted specific details, interpreting deceptive outputs as logically framed information that broadened their perspective. We discuss critical implications of these findings for the design of trustworthy AI systems, adaptive user interfaces, and targeted literacy education.
ACM CHI Conference on Human Factors in Computing Systems