Humans naturally seek to identify causes behind outcomes through causal attribution, yet human-AI interaction research often overlooks how users perceive causality behind AI decisions. We examine how this perceived locus of causality—internal or external to the AI—influences trust, and how decision stakes and outcome favourability moderate this relationship. Participants (N=192) engaged with AI-based decision-making scenarios operationalising varying loci of causality, stakes, and favourability, and evaluated their trust in each AI. We find that internal attributions foster lower trust, as participants perceive the AI to have high autonomy and decision-making responsibility. Conversely, external attributions portray the AI as merely "a tool" processing data, reducing its perceived agency and distributing responsibility, thereby boosting trust. Moreover, stakes moderate this relationship: external attributions foster even more trust in lower-risk, low-stakes scenarios. Our findings establish causal attribution as a crucial yet underexplored determinant of trust in AI, highlighting the importance of accounting for it when researching trust dynamics.
https://dl.acm.org/doi/10.1145/3706598.3713468
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)