Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation

Abstract

Feedback in creativity support tools can help crowdworkers improve their ideations. However, current feedback methods require human assessment from facilitators or peers, which does not scale to large crowds. We propose Interpretable Directed Diversity to automatically predict ideation quality and diversity scores, and to provide AI explanations — Attribution, Contrastive Attribution, and Counterfactual Suggestions — as feedback on why ideations were scored low and how to achieve higher scores. These explanations provide multi-faceted feedback as users iteratively improve their ideations. We conducted formative and controlled user studies to understand the usage and usefulness of explanations for improving ideation diversity and quality. Users appreciated that explanation feedback helped focus their efforts and provided directions for improvement. As a result, explanations improved diversity compared to no feedback or feedback with scores only. Hence, our approach opens opportunities for explainable AI towards scalable and rich feedback for iterative crowd ideation and creativity support tools.

Authors
Yunlong Wang
National University of Singapore, Singapore, Singapore
Priyadarshini Venkatesh
University College London, London, United Kingdom
Brian Y. Lim
National University of Singapore, Singapore, Singapore
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517551


Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Mistakes, Explainability

383-385
5 presentations
2022-05-03 18:00 – 19:15