Presenting Large Language Models as Companions Affects What Mental Capacities People Attribute to Them

Abstract

How might messages about large language models (LLMs) found in public discourse influence the way people think about and interact with these models? To explore this question, we randomly assigned participants (N = 470) to watch short informational videos presenting LLMs as either machines, tools, or companions---or to watch no video. We then assessed how strongly they believed LLMs to possess various mental capacities, such as the ability to have intentions or remember things. We found that participants who watched videos presenting LLMs as companions reported believing that LLMs more fully possessed these capacities than did participants in the other groups. In a follow-up study (N = 604), we replicated these findings and found that the videos also had nuanced effects on people's reliance on LLM-generated responses when seeking factual information. Together, these studies suggest that messages about LLMs---beyond technical advances---may shape what people believe about these systems and how they rely on LLM-generated responses.

Authors
Allison Chen
Princeton University, Princeton, New Jersey, United States
Sunnie S. Y. Kim
Apple, Seattle, Washington, United States
Angel Nathaniel Franyutti-Cintron
Princeton University, Princeton, New Jersey, United States
Amaya Dharmasiri
Princeton University, Princeton, New Jersey, United States
Kushin Mukherjee
Stanford University, Stanford, California, United States
Olga Russakovsky
Princeton University, Princeton, New Jersey, United States
Judith E. Fan
Stanford University, Stanford, California, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Modeling Minds and Mentalities

P1 - Room 128
6 presentations
2026-04-17, 20:15–21:45