Measuring and Understanding Trust Calibrations for Automated Systems: A Survey of the State-Of-The-Art and Future Directions

Abstract

Trust has been recognized as a central variable to explain the resistance to using automated systems (under-trust) and the overreliance on automated systems (over-trust). To achieve appropriate reliance, users’ trust should be calibrated to reflect a system’s capabilities. Studies from various disciplines have examined different interventions to attain such trust calibration. Based on a literature body of 1000+ papers, we identified 96 relevant publications which aimed to calibrate users’ trust in automated systems. To provide an in-depth overview of the state-of-the-art, we reviewed and summarized measurements of the trust calibration, interventions, and results of these efforts. For the numerous promising calibration interventions, we extract common design choices and structure these into four dimensions of trust calibration interventions to guide future studies. Our findings indicate that the measurement of the trust calibration often limits the interpretation of the effects of different interventions. We suggest future directions for this problem.

Authors
Magdalena Wischnewski
Research Center for Trustworthy Data Science and Security, Dortmund, Germany
Nicole Krämer
Social Psychology - Media and Communication, Universität Duisburg-Essen, Duisburg, Germany
Emmanuel Müller
TU Dortmund, Dortmund, Germany
Paper URL

https://doi.org/10.1145/3544548.3581197

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Trust and Explainable AI

Room X11+X12
6 presentations
2023-04-24 23:30:00 – 2023-04-25 00:55:00