Behavioral Indicators of Overreliance During Interaction with Conversational Language Models

Abstract

LLMs are now embedded in a wide range of everyday scenarios. However, their tendency to hallucinate risks embedding misinformation in fluent responses, raising concerns about overreliance on AI. Detecting overreliance is challenging: it often arises in complex, dynamic contexts and cannot be easily captured by post-hoc task outcomes. In this work, we investigate how users' behavioral patterns correlate with overreliance. We collected interaction logs from 77 participants working with an LLM injected with plausible misinformation across three real-world tasks, and we assessed overreliance by whether participants detected and corrected these errors. By semantically encoding and clustering segments of user interactions, we identified five behavioral patterns linked to overreliance: users with low overreliance show careful task comprehension and fine-grained navigation, whereas users with high overreliance show frequent copy-pasting, skipping of initial comprehension, repeated references back to the LLM, coarse locating, and acceptance of misinformation despite hesitation. We conclude with design implications for mitigating overreliance.

Authors
Chang Liu
Tsinghua University, Beijing, China
Qinyi Zhou
Hong Kong University of Science and Technology, Hong Kong, China
Xinjie Shen
Georgia Institute of Technology, Atlanta, Georgia, United States
Xingyu Bruce Liu
UCLA, Los Angeles, California, United States
Tongshuang Wu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Xiang 'Anthony' Chen
UCLA, Los Angeles, California, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Human Behavior with AI Systems

M2 - Room M211/212
7 presentations
2026-04-14, 20:15–21:45