Human Reliance on Machine Learning Models When Performance Feedback is Limited: Heuristics and Risks

Abstract

This paper addresses an under-explored problem in AI-assisted decision-making: when objective performance information about the machine learning model underlying a decision aid is absent or scarce, how do people decide how much to rely on the model? Through three randomized experiments, we explore the heuristics people may use to adjust their reliance on machine learning models when performance feedback is limited. We find that when people receive no information about a model's performance, their reliance on the model is significantly affected by the level of agreement between themselves and the model on decision-making tasks in which they have high confidence; this effect changes once aggregate-level model performance information becomes available. Furthermore, the influence of high-confidence human-model agreement on people's reliance on a model is moderated by their confidence in cases where they disagree with the model. We discuss potential risks of these heuristics and provide design implications for promoting appropriate reliance on AI.

Authors
Zhuoran Lu
Purdue University, West Lafayette, Indiana, United States
Ming Yin
Purdue University, West Lafayette, Indiana, United States
DOI

10.1145/3411764.3445562

Paper URL

https://doi.org/10.1145/3411764.3445562

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Computational AI Development and Explanation

[A] Paper Room 02, 2021-05-13 17:00:00–19:00:00 / [B] Paper Room 02, 2021-05-14 01:00:00–03:00:00 / [C] Paper Room 02, 2021-05-14 09:00:00–11:00:00
Paper Room 02 (12 presentations)