Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making

Abstract

Although AI holds promise for improving human decision making in societally critical domains, it remains an open question how human-AI teams can reliably outperform AI alone and humans alone in challenging prediction tasks (also known as complementary performance). We explore two directions to understand the gaps in achieving complementary performance. First, we argue that the typical experimental setup limits the potential of human-AI teams. To account for lower AI performance out-of-distribution than in-distribution because of distribution shift, we design experiments with different distribution types and investigate human performance for both in-distribution and out-of-distribution examples. Second, we develop novel interfaces to support interactive explanations so that humans can actively engage with AI assistance. Using virtual pilot studies and large-scale randomized experiments across three tasks, we demonstrate a clear difference between in-distribution and out-of-distribution examples, and observe mixed results for interactive explanations: while interactive explanations improve human perception of AI assistance’s usefulness, they may reinforce human biases and lead to limited performance improvement. Overall, our work points out critical challenges and future directions towards enhancing human performance with AI assistance.
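The first direction above rests on a general premise: a model trained on one distribution tends to be less accurate on shifted (out-of-distribution) data. The minimal sketch below illustrates that premise only; the synthetic data, the `make_data` helper, and the plain logistic regression are hypothetical stand-ins, not the paper's actual tasks or models.

```python
# Sketch (assumption: synthetic Gaussian data and logistic regression stand in
# for the paper's real tasks/models). Train on one distribution, then compare
# accuracy on an in-distribution test set vs. a mean-shifted one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Binary labels; `shift` moves every class-conditional feature mean,
    # simulating distribution shift at test time.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 1.5 + shift, scale=1.0, size=(n, 5))
    return X, y

X_train, y_train = make_data(2000)          # training data (in-distribution)
X_id, y_id = make_data(1000)                # in-distribution test set
X_ood, y_ood = make_data(1000, shift=1.0)   # shifted, out-of-distribution test set

model = LogisticRegression().fit(X_train, y_train)
print(f"in-distribution accuracy:     {model.score(X_id, y_id):.3f}")
print(f"out-of-distribution accuracy: {model.score(X_ood, y_ood):.3f}")
```

Running this prints a noticeably lower out-of-distribution score, which is the situation the paper's participants face: AI assistance that is reliable in-distribution but less so out-of-distribution.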

Authors
Han Liu
University of Chicago, Chicago, Illinois, United States
Vivian Lai
University of Colorado Boulder, Boulder, Colorado, United States
Chenhao Tan
University of Chicago, Chicago, Illinois, United States
Paper URL

https://doi.org/10.1145/3479552

Conference: CSCW 2021

The 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing

Session: Algorithms and Decision Making

Papers Room B
8 presentations
2021-10-25 23:00:00 – 2021-10-26 00:30:00