LLM or Human? Perceptions of Trust and Quality in Research Summaries

Abstract

Large Language Models (LLMs) are increasingly used to generate and edit scientific abstracts, yet their integration into academic writing raises questions about trust, quality, and disclosure. Despite growing adoption, little is known about how readers perceive LLM-generated summaries and how these perceptions influence evaluations of scientific work. This paper presents a mixed-methods survey experiment investigating whether readers with ML expertise can distinguish between human- and LLM-generated abstracts, how actual and perceived LLM involvement affects judgments of quality and trustworthiness, and what orientations readers adopt toward AI-assisted writing. Our findings show that participants struggle to reliably identify LLM-generated content, yet their beliefs about LLM involvement significantly shape their evaluations. Notably, abstracts edited by LLMs are rated more favorably than those written solely by humans or LLMs. We also identify three distinct reader orientations toward LLM-assisted writing, offering insights into evolving norms and informing policy around disclosure and acceptable use in scientific communication.

Authors
Nil-Jana Akpinar
Amazon AWS AI/ML, Seattle, Washington, United States
Sandeep Avula
Amazon AWS AI/ML, Seattle, Washington, United States
Chia-Jung Lee
Amazon AWS AI/ML, Seattle, Washington, United States
Brandon Dang
Amazon AWS AI/ML, Seattle, Washington, United States
Kaza Razat
Amazon AWS AI/ML, Seattle, Washington, United States
Vanessa G. Murdock
Amazon, Seattle, Washington, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: AI & Timing Matters

P1 - Room 129
7 presentations
2026-04-13, 20:15 to 21:45