Trust has been recognized as a central variable in explaining both the resistance to using automated systems (under-trust) and overreliance on them (over-trust). To achieve appropriate reliance, users' trust should be calibrated to reflect a system's capabilities. Studies from various disciplines have examined different interventions to attain such trust calibration. From a body of more than 1,000 papers, we identified 96 relevant publications that aimed to calibrate users' trust in automated systems. To provide an in-depth overview of the state of the art, we reviewed and summarized the trust calibration measurements, interventions, and results of these efforts. Across the numerous promising calibration interventions, we extract common design choices and structure them into four dimensions of trust calibration interventions to guide future studies. Our findings indicate that the way trust calibration is measured often limits the interpretation of the effects of different interventions. We suggest future research directions to address this problem.
https://doi.org/10.1145/3544548.3581197
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)