Large Language Models (LLMs) aim to mimic a natural form of human conversation, which likely contributes to a more anthropomorphic perception of AI than conventional human-computer interfaces do. Our study explores human-AI conversations and humans' perception of their counterpart in a collaborative mystery-solving task with Anthropic's Claude 3.5 Sonnet v2 model. We collected self-report data on participants' perception of the interaction, measured task performance, and analyzed conversational dynamics using LLM-based emotion coding. We found that humans' perception of the AI, ranging from that of a teammate or colleague to a tool, did not necessarily impact performance in mystery solving, but did correlate with aspects of the interaction itself. When participants perceived the AI as a teammate or colleague, they felt a stronger sense of team cohesion, and their conversations were more collaborative, with more positive emotions. These findings may help practitioners design human-AI interfaces that foster positive interactions without compromising performance.
ACM CHI Conference on Human Factors in Computing Systems