Good Performance Isn't Enough to Trust AI: Lessons from Logistics Experts on their Long-Term Collaboration with an AI Planning System

Abstract

While research on trust in human-AI interaction is gaining recognition, much of it is conducted in lab settings that lack ecological validity and often omit the trust-development perspective. We investigated a real-world case in which logistics experts had worked with an AI system for several years (in some cases since its introduction). Through thematic analysis, three key themes emerged. First, although experts clearly pointed out the AI system's imperfections, they still developed trust over time. Second, inconsistencies and frequent efforts to improve the AI system disrupted trust development, hindering control, transparency, and understanding of the system. Finally, despite the system's overall trustworthiness, experts overrode correct AI decisions to protect their colleagues' well-being. By comparing our results with the latest trust research, we confirm prior empirical work and contribute new perspectives, such as an understanding of the importance of human elements for trust development in human-AI scenarios.

Authors
Patricia K. Kahr
Eindhoven University of Technology, Eindhoven, Netherlands
Gerrit Rooks
Eindhoven University of Technology, Eindhoven, Netherlands
Chris Snijders
Eindhoven University of Technology, Eindhoven, Netherlands
Martijn C. Willemsen
Jheronimus Academy of Data Science, Den Bosch, Netherlands
DOI

10.1145/3706598.3713099

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713099

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Trust and Responsibility in AI

Room: G302
7 presentations
2025-04-30 20:10:00 – 2025-04-30 21:40:00