Cracking the Case Together: Role Perceptions in Human-AI Mystery Solving Dialogues

Abstract

Large Language Models (LLMs) aim to mimic a natural form of human conversation, likely contributing to an anthropomorphic perception of AI in contrast to conventional human-computer interfaces. Our study explores human-AI conversations and humans’ perception of their counterpart in a collaborative mystery solving task with Anthropic’s Claude 3.5 Sonnet v2 model. We collected self-report data on participants’ perception of the interaction, measured task performance, and analyzed conversational dynamics using LLM-based emotion coding. We found that humans’ perception of AI, ranging from that of a teammate or colleague to a tool, did not necessarily impact performance in mystery solving, but correlated with aspects of the interaction itself. When participants perceived the AI as a teammate or colleague, they felt a stronger sense of team cohesion and their conversations were more collaborative, with more positive emotions. These findings may help practitioners design human-AI interfaces that foster positive interactions without endangering performance.

Authors
Karin Breckner
University of Applied Sciences Upper Austria, Hagenberg, Austria
Johannes Schönböck
University of Applied Sciences Upper Austria, Hagenberg, Austria
Carrie Kovacs
University of Applied Sciences Upper Austria, Hagenberg, Austria
Frederik Hirschmann
University of Applied Sciences Upper Austria, Hagenberg, Austria
Thomas Neumayr
University of Applied Sciences Upper Austria, Hagenberg, Austria
Eva Reyskens
University of Applied Sciences Upper Austria, Hagenberg, Austria
Mirjam Augstein
University of Applied Sciences Upper Austria, Hagenberg, Austria

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Synergistic with AI

P1 - Room 125
6 presentations
2026-04-15, 20:15–21:45