Crowdsourcing and Evaluating Concept-driven Explanations of Machine Learning Models

Abstract

An important challenge in building explainable artificially intelligent (AI) systems is designing interpretable explanations. AI models often use low-level data features, which may be hard for humans to interpret. Recent research suggests that situating machine decisions in abstract, human-understandable concepts can help. However, it is challenging to determine the right level of conceptual mapping. In this research, we use granularity (of data features) and context (of data instances) as ways to determine this conceptual mapping. Based on these measures, we explore strategies for designing explanations in classification models. We introduce an end-to-end concept elicitation pipeline that supports gathering high-level concepts for a given data set. Through crowdsourced experiments, we examine how providing conceptual information shapes the effectiveness of explanations, finding that a balance between coarse- and fine-grained explanations helps users better estimate model predictions. We organize our findings into systematic themes that can inform design considerations for future systems.

Authors
Swati Mishra
Cornell University, Ithaca, New York, United States
Jeffrey M. Rzeszotarski
Cornell University, Ithaca, New York, United States
Paper URL

https://doi.org/10.1145/3449213

Conference: CSCW 2021

The 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing

Session: Interpreting and Explaining AI

Papers Room B
8 presentations
2021-10-26 19:00–20:30