"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans

Abstract

To support human decision making with machine learning models, we often need to elucidate patterns embedded in the models that are unsalient, unknown, or counterintuitive to humans. While existing approaches focus on explaining machine predictions with real-time assistance, we explore model-driven tutorials to help humans understand these patterns in a training phase. We consider both tutorials with guidelines from scientific papers, analogous to current practices of science communication, and automatically selected examples from training data with explanations. We use deceptive review detection as a testbed and conduct large-scale, randomized human-subject experiments to examine the effectiveness of such tutorials. We find that tutorials indeed improve human performance, with and without real-time assistance. In particular, although deep learning provides better predictive performance than simple models, tutorials and explanations from simple models are more useful to humans. Our work suggests future directions for human-centered tutorials and explanations towards a synergy between humans and AI.

Keywords
explanations
interpretable machine learning
tutorials
deception detection
Authors
Vivian Lai
University of Colorado Boulder, Boulder, CO, USA
Han Liu
University of Colorado Boulder, Boulder, CO, USA
Chenhao Tan
University of Colorado Boulder, Boulder, CO, USA
DOI

10.1145/3313831.3376873

Paper URL

https://doi.org/10.1145/3313831.3376873

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Augmenting work & productivity

Paper session
316C MAUI
5 presentations
2020-04-28 01:00:00
2020-04-28 02:15:00