While research on trust in human-AI interaction is gaining recognition, much of it is conducted in lab settings that lack ecological validity and often omit the perspective of trust development over time. We investigated a real-world case in which logistics experts had worked with an AI system for several years (in some cases since its introduction). Thematic analysis surfaced three key themes: First, although the experts clearly pointed out the AI system’s imperfections, they nevertheless developed trust in it over time. Second, inconsistencies and frequent efforts to improve the AI system disrupted trust development by hindering control, transparency, and understanding of the system. Finally, despite the system’s overall trustworthiness, experts overrode correct AI decisions to protect their colleagues’ well-being. By comparing our results with the latest trust research, we confirm prior empirical work and contribute new perspectives, such as the importance of human elements for trust development in human-AI scenarios.
https://dl.acm.org/doi/10.1145/3706598.3713099
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)