Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation

Abstract

Despite impressive performance on many benchmark datasets, AI models can still make mistakes, especially on out-of-distribution examples. It remains an open question how such imperfect models can be used effectively in collaboration with humans. Prior work has focused on AI assistance that helps people make individual high-stakes decisions, an approach that does not scale to the large volume of relatively low-stakes decisions, e.g., moderating social media comments. Instead, we propose conditional delegation as an alternative paradigm for human-AI collaboration, in which humans create rules to indicate trustworthy regions of a model. Using content moderation as a testbed, we develop novel interfaces to assist humans in creating conditional delegation rules and conduct a randomized experiment with two datasets to simulate in-distribution and out-of-distribution scenarios. Our study demonstrates the promise of conditional delegation in improving model performance and provides insights into design for this novel paradigm, including the effect of AI explanations.
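The core mechanism described in the abstract, routing each example either to the model or to a human based on human-authored trust rules, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the keyword-based rules, function names, and the stand-in classifier are all assumptions made for clarity.

```python
# Minimal sketch of conditional delegation (hypothetical, not the paper's code).
# Humans author rules marking regions where the model is trusted; comments
# matching any rule are delegated to the model, the rest go to human moderators.

def make_keyword_rule(keywords):
    """Rule: trust the model on comments containing any of these keywords."""
    def rule(comment):
        text = comment.lower()
        return any(kw in text for kw in keywords)
    return rule

def conditionally_delegate(comments, rules, model_predict):
    """Split comments into a model-handled queue and a human-review queue."""
    model_queue, human_queue = [], []
    for comment in comments:
        if any(rule(comment) for rule in rules):
            model_queue.append((comment, model_predict(comment)))
        else:
            human_queue.append(comment)
    return model_queue, human_queue

# Toy usage with a placeholder classifier standing in for the real model.
rules = [make_keyword_rule(["idiot", "stupid"])]
model = lambda comment: "toxic"  # stand-in for a trained toxicity classifier
delegated, escalated = conditionally_delegate(
    ["you are an idiot", "nice weather today"], rules, model
)
```

The key design point is that the human never labels individual comments at decision time; effort is spent once on authoring rules, which then scale to arbitrarily many low-stakes decisions.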

Authors
Vivian Lai
University of Colorado Boulder, Boulder, Colorado, United States
Samuel Carton
University of Colorado Boulder, Boulder, Colorado, United States
Rajat Bhatnagar
University of Colorado Boulder, Boulder, Colorado, United States
Q. Vera Liao
IBM Research, Yorktown Heights, New York, United States
Yunfeng Zhang
IBM Research, Yorktown Heights, New York, United States
Chenhao Tan
University of Chicago, Chicago, Illinois, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501999

Video

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Trust, Recommendation, and Explainable AI (XAI)

5 presentations
2022-05-03 01:15:00 – 02:30:00