Human-like behavior in Artificial Intelligence (AI) increasingly shapes human–AI interaction, leading users to attribute consciousness to these systems. Yet the factors underlying such attributions remain largely unexplored. We report findings from an online survey (N=553) with participants drawn primarily from academia (formal sciences, natural sciences, and humanities), whose educational backgrounds support more accurate mental models within their fields of study, alongside participants from diverse backgrounds. Respondents evaluated their perceptions of consciousness (self-defined) in Large Language Models (LLMs) they had previously interacted with, consciousness in future AI, and related ethical considerations. The results show that, across groups, around half of the participants attributed some degree of consciousness to LLMs. Individual traits such as gender, together with participants' conceptual positions on consciousness and its relation to intelligence, shaped consciousness attributions, outweighing the effects of technical knowledge or system transparency. Beyond informing academic discussions, these perspectives shape how AI is designed, governed, and integrated into everyday interactions.