No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML

Abstract

Automatically generated explanations of how machine learning (ML) models reason can help users understand and accept them. However, explanations can have unintended consequences: promoting over-reliance or undermining trust. This paper investigates how explanations shape users' perceptions of ML models with or without the ability to provide feedback to them: (1) does revealing model flaws increase users' desire to "fix" them; (2) does providing explanations cause users to believe, wrongly, that models are introspective and will thus improve over time? Through two controlled experiments that vary model quality, we show how the combination of explanations and user feedback impacted perceptions, such as frustration and expectations of model improvement. Explanations without an opportunity for feedback were frustrating with a lower-quality model, while interactions between explanation and feedback for the higher-quality model suggest that detailed feedback should not be requested without explanation. Users expected the model to be corrected regardless of whether they provided feedback or received explanations.

Keywords
interactive machine learning
explainable machine learning
Authors
Alison Smith-Renner
University of Maryland, College Park, MD, USA
Ron Fan
University of Washington, Seattle, WA, USA
Melissa Birchfield
University of Washington, Seattle, WA, USA
Tongshuang Wu
University of Washington, Seattle, WA, USA
Jordan Boyd-Graber
University of Maryland, College Park, MD, USA
Daniel S. Weld
University of Washington, Seattle, WA, USA
Leah Findlater
University of Washington, Seattle, WA, USA
DOI

10.1145/3313831.3376624

Paper URL

https://doi.org/10.1145/3313831.3376624

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Interactive ML & recommender systems

Paper session
Room: 312 NI'IHAU
5 presentations
2020-04-28 01:00:00 to 2020-04-28 02:15:00