A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores

Abstract

The increased use of algorithmic predictions in sensitive domains has been accompanied by both enthusiasm and concern. To understand the opportunities and risks of these technologies, it is key to study how experts alter their decisions when using such tools. In this paper, we study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions. We focus on the question: Are humans capable of identifying cases in which the machine is wrong, and of overriding those recommendations? We first show that humans do alter their behavior when the tool is deployed. Then, we show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk, even when overriding the recommendation requires supervisory approval. These results highlight the risks of full automation and the importance of designing decision pipelines that provide humans with autonomy.

Keywords
Human-in-the-loop
Decision support
Algorithm aversion
Automation bias
Algorithm assisted decision making
Child welfare
Authors
Maria De-Arteaga
Carnegie Mellon University, Pittsburgh, PA, USA
Riccardo Fogliato
Carnegie Mellon University, Pittsburgh, PA, USA
Alexandra Chouldechova
Carnegie Mellon University, Pittsburgh, PA, USA
DOI

10.1145/3313831.3376638

Paper URL

https://doi.org/10.1145/3313831.3376638

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Interactive ML & recommender systems

Paper session
Room: 312 NI'IHAU
5 presentations
2020-04-28 01:00:00 – 2020-04-28 02:15:00