Public concern over media-driven polarization and the rise of AI-modified news has sparked interest in how sentiment and framing shape perceptions. This study examines variations in sentiment (neutral vs. extreme) and framing (balanced vs. one-sided) in LLM-transformed news, along with disclosure of LLM involvement, to assess effects on readers’ emotions, perceptions, and credibility judgments. In a 2×2 between-subjects experiment (≈180 U.S. participants) plus a baseline control group (n = 45), articles were adapted from real news and transformed with LLMs. Results show that extreme sentiment worsened outcomes, heightening negative emotions and lowering trustworthiness, while framing exerted more nuanced effects. Balanced articles with extreme sentiment elicited heightened perceptions of bias and surprise, consistent with the Hostile Media Effect, in which balanced coverage is perceived as biased because it amplifies opposing viewpoints. Disclosure of LLM involvement modestly improved trustworthiness without undermining fairness or credibility. Overall, the findings highlight the need for transparent, user-facing interventions and editorial oversight in AI-mediated journalism.
ACM CHI Conference on Human Factors in Computing Systems