Improving understandability of feature contributions in model-agnostic explainable AI tools

Abstract

Model-agnostic explainable AI tools explain model predictions by means of 'local' feature contributions. We empirically investigate two potential improvements over current approaches. The first is to always present feature contributions in terms of their contribution to the outcome that the user perceives as positive ("positive framing"). The second is to add "semantic labeling", which makes the directionality of each feature contribution explicit ("this feature leads to +5% eligibility"), reducing the number of additional cognitive processing steps. In a user study, participants evaluated the understandability of explanations under different framing and labeling conditions for loan applications and music recommendations. We found that positive framing improves understandability even when the prediction is negative. Additionally, adding semantic labels eliminates any framing effects on understandability, with positive labels outperforming negative ones. We implemented our suggestions in a package, ArgueView.
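The sketch below illustrates the two ideas in combination: a local feature contribution is always phrased in terms of the outcome the user perceives as positive, and a semantic label spells out its direction. This is a minimal hypothetical sketch, not the ArgueView API; the function name, signature, and example contribution values are all assumptions made for illustration.

```python
# Hedged sketch (not the ArgueView API): render a local feature contribution
# with positive framing plus a semantic label. All names and numbers below
# are illustrative assumptions, not taken from the paper's implementation.

def explain_contribution(feature: str, contribution: float,
                         positive_outcome: str = "eligibility") -> str:
    """Phrase a contribution in terms of the positive outcome and make its
    directionality explicit, so the user needs fewer processing steps."""
    pct = contribution * 100
    # Semantic label: state the direction of the effect in words.
    direction = "increases" if pct >= 0 else "decreases"
    # Positive framing: the effect is always expressed on the positive
    # outcome (e.g. eligibility), even when the prediction is negative.
    return (f"Feature '{feature}' {direction} your {positive_outcome} "
            f"by {abs(pct):.0f}% ({pct:+.0f}% {positive_outcome}).")

if __name__ == "__main__":
    # Hypothetical loan-application contributions, on a -1..1 scale.
    for feature, contrib in [("income", 0.05), ("loan amount", -0.08)]:
        print(explain_contribution(feature, contrib))
    # -> Feature 'income' increases your eligibility by 5% (+5% eligibility).
    # -> Feature 'loan amount' decreases your eligibility by 8% (-8% eligibility).
```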

Authors
Sophia Hadash
Jheronimus Academy of Data Science, 's-Hertogenbosch, Netherlands
Martijn C. Willemsen
Jheronimus Academy of Data Science, 's-Hertogenbosch, Netherlands
Chris Snijders
Eindhoven University of Technology, Eindhoven, Netherlands
Wijnand IJsselsteijn
Eindhoven University of Technology, Eindhoven, Netherlands
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517650

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Trust, Recommendation, and Explainable AI (XAI)

5 presentations
Session time: 2022-05-03 01:15 – 02:30