Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making

Abstract

In AI-assisted decision-making, it is critical for human decision-makers to know when to trust the AI and when to trust themselves. However, prior studies calibrated human trust based only on AI confidence, which indicates the AI's correctness likelihood (CL), while ignoring the human's CL, hindering optimal team decision-making. To mitigate this gap, we proposed promoting appropriate human trust based on the CL of both sides at a task-instance level. We first modeled humans' CL by approximating their decision-making models and computing their potential performance on similar instances, and demonstrated the feasibility and effectiveness of this model in two preliminary studies. We then proposed three CL exploitation strategies to calibrate users' trust explicitly or implicitly during the AI-assisted decision-making process. Results from a between-subjects experiment (N=293) showed that our CL exploitation strategies promoted more appropriate human trust in AI than using AI confidence alone. We further provide practical implications for more human-compatible AI-assisted decision-making.

Authors
Shuai Ma
The Hong Kong University of Science and Technology, Hong Kong, China
Ying Lei
East China Normal University, Shanghai, China
Xinru Wang
Purdue University, West Lafayette, Indiana, United States
Chengbo Zheng
The Hong Kong University of Science and Technology, Hong Kong, China
Chuhan Shi
The Hong Kong University of Science and Technology, Hong Kong, China
Ming Yin
Purdue University, West Lafayette, Indiana, United States
Xiaojuan Ma
The Hong Kong University of Science and Technology, Hong Kong, China
Paper URL

https://doi.org/10.1145/3544548.3581058

Video

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Trust and Explainable AI

Room X11+X12
6 presentations
2023-04-24 23:30:00 – 2023-04-25 00:55:00