Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills

Abstract

People's decision-making abilities often fail to improve or may even erode when they rely on AI for decision support, even when the AI provides informative explanations. We argue this is partly because people intuitively seek contrastive explanations, which clarify the difference between the AI's decision and their own reasoning, while most AI systems offer "unilateral" explanations that justify the AI’s decision but do not account for users' knowledge and thinking. To address potential human knowledge gaps, we introduce a framework for generating human-centered contrastive explanations that explain the difference between AI's choice and a predicted, likely human choice about the same task. Results from a large-scale experiment (N = 628) demonstrate that contrastive explanations significantly enhance users' independent decision-making skills compared to unilateral explanations, without sacrificing decision accuracy. As concerns about deskilling in AI-supported tasks grow, our research demonstrates that integrating human reasoning into AI design can promote human skill development.

Award
Honorable Mention
Authors
Zana Buçinca
Harvard University, Cambridge, Massachusetts, United States
Siddharth Swaroop
Harvard University, Cambridge, Massachusetts, United States
Amanda E. Paluch
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Finale Doshi-Velez
Harvard University, Cambridge, Massachusetts, United States
Krzysztof Z. Gajos
Harvard University, Allston, Massachusetts, United States
DOI

10.1145/3706598.3713229

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713229

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Technologies for Decision Making

Room: G402
6 presentations
2025-04-30 23:10:00
2025-05-01 00:40:00