People often rely on heuristic reasoning when receiving algorithmic advice, and this reliance leads to biased decisions that undermine the effectiveness of human-AI collaboration. Such bias persists even when individuals are given more time to deliberate or more information about the AI, as they may lack the awareness or ability to engage in systematic reasoning. In this paper, we explore how guided reflection may enhance decision-making performance in human-AI collaboration by prompting a systematic reasoning process. We conducted an experiment with 178 participants, comparing decision-making behavior across three conditions: AI only, explainable AI (XAI), and XAI with reflection. The results demonstrate that reflection significantly reduced over-reliance on AI and improved decision accuracy. Individuals with a high need for cognition and a high perceived understanding of AI benefited more from reflection. Furthermore, our study uncovers distinct patterns of cognitive processing and belief adjustment across the experimental conditions. Our findings provide a practical strategy for fostering cognitive engagement and contribute to a deeper understanding of human cognitive processes in AI-assisted decision-making.
ACM CHI Conference on Human Factors in Computing Systems