Personal Validation Effect in LLMs: Positive AI Responses Bias Perceptions of Validity, Reliability, Personalization, and Usefulness of Fictitious Predictions

Abstract

Large Language Models (LLMs) are becoming increasingly ubiquitous in daily life, impacting decision-making across many domains. A substantial body of prior work has shown that individuals evaluate positive predictions more favorably than negative ones (a phenomenon often referred to as the personal validation effect) across a variety of non-AI prediction sources. Building on this foundation, we extend this well-established psychological effect to LLM-based predictions, examining how prediction valence shapes users' perceptions when the source is an AI system. Specifically, we investigate how positive AI-generated responses affect the perceived validity, personalization, reliability, and usefulness of chatbot predictions, even when those predictions are fictitious and pre-scripted. In a study of 238 participants, positive predictions were perceived as significantly more valid (36% increase), personalized (42% increase), reliable (27% increase), and useful (22% increase) than negative predictions. These findings demonstrate that the personal validation effect persists in interactions with LLMs and underscore the substantial role of prediction valence in shaping user perceptions, with important implications for the design and deployment of AI systems across diverse applications.

Authors
Pat Pataranutaporn
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Eunhae Lee
MIT Media Lab, Cambridge, Massachusetts, United States
Judith Amores
MIT, Cambridge, Massachusetts, United States
Pattie Maes
MIT, Cambridge, Massachusetts, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: AI in Practice

P1 - Room 122
7 presentations
2026-04-15, 18:00–19:30