When AI Rewrites the News: How Sentiment, Framing, and LLM Disclosure Shape Perceptions

Abstract

Public concern over media-driven polarization and the rise of AI-modified news has sparked interest in how sentiment and framing shape perceptions. This study examines variations in sentiment (neutral vs. extreme) and framing (balanced vs. one-sided) in LLM-transformed news, along with disclosure of LLM involvement, to assess effects on readers’ emotions, perceptions, and credibility judgments. In a 2×2 between-subjects experiment (≈180 U.S. participants) plus a baseline control (n = 45), articles were adapted from real news and transformed with LLMs. Results show that extreme sentiment worsened outcomes, heightening negative emotions and lowering trustworthiness, while framing exerted more nuanced effects. Balanced articles with extreme sentiment elicited amplified perceptions of bias and surprise, consistent with the Hostile Media Effect, in which balanced coverage appears biased because opposing viewpoints are amplified. Disclosure of LLM involvement modestly improved trustworthiness without undermining fairness or credibility. Overall, the findings highlight the need for transparent, user-facing interventions and editorial oversight in AI-mediated journalism.

Authors
Prerana Khatiwada
University of Delaware, Newark, Delaware, United States
Varun Pappu
University of Delaware, Newark, Delaware, United States
Benjamin E. Bagozzi
University of Delaware, Newark, Delaware, United States
Matthew Louis Mauriello
University of Delaware, Newark, Delaware, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Quantifying the Algorithmic Lens

P1 - Room 131
7 presentations
2026-04-13, 20:15–21:45