People's decision-making abilities often fail to improve, and may even erode, when they rely on AI for decision support, even when the AI provides informative explanations. We argue this is partly because people intuitively seek contrastive explanations, which clarify the difference between the AI's decision and their own reasoning, while most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' knowledge and thinking. To address potential human knowledge gaps, we introduce a framework for generating human-centered contrastive explanations that explain the difference between the AI's choice and a predicted, likely human choice about the same task. Results from a large-scale experiment (N = 628) show that contrastive explanations significantly enhance users' independent decision-making skills compared to unilateral explanations, without sacrificing decision accuracy. As concerns about deskilling in AI-supported tasks grow, our research demonstrates that integrating human reasoning into AI design can promote human skill development.
https://dl.acm.org/doi/10.1145/3706598.3713229
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)
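The abstract describes the framework only at a high level, but the core mechanism it names (compare the AI's choice against a predicted, likely human choice and explain the difference) can be sketched in code. Below is a minimal, hypothetical Python illustration; the function names, the human-choice heuristic, and the feature-ranking logic are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a contrastive-explanation pipeline: predict the
# likely human choice, compare it with the AI's choice, and explain the
# difference. All model logic here is an illustrative stand-in.

from dataclasses import dataclass


@dataclass
class Explanation:
    ai_choice: str
    predicted_human_choice: str
    contrast: str


def predict_human_choice(features: dict[str, float]) -> str:
    """Stand-in for a learned model of typical human decisions (assumption:
    humans over-weight the single most salient feature)."""
    salient = max(features, key=lambda k: abs(features[k]))
    return "accept" if features[salient] > 0 else "reject"


def ai_choice(features: dict[str, float]) -> str:
    """Stand-in for the AI decision model (assumption: it aggregates all
    feature evidence)."""
    return "accept" if sum(features.values()) > 0 else "reject"


def contrastive_explanation(features: dict[str, float]) -> Explanation:
    """Explain the AI's choice relative to the predicted human choice."""
    human = predict_human_choice(features)
    ai = ai_choice(features)
    if ai == human:
        contrast = f"The AI agrees with the likely human choice ({ai})."
    else:
        # Name the evidence that pulls the AI's choice away from the
        # predicted human one (simple heuristic: top features in the
        # direction of the AI's choice).
        drivers = sorted(features, key=lambda k: features[k],
                         reverse=(ai == "accept"))
        contrast = (f"You might choose '{human}', but the AI chose '{ai}' "
                    f"mainly because of {drivers[0]} and {drivers[1]}.")
    return Explanation(ai, human, contrast)


if __name__ == "__main__":
    example = {"income": 0.4, "debt": -0.9, "tenure": 0.7}
    print(contrastive_explanation(example).contrast)
```

In this toy setup the predicted human model fixates on the most salient feature (here, debt) and would reject, while the AI aggregates all evidence and accepts, so the explanation contrasts the two choices rather than unilaterally justifying the AI's. The real framework's human-choice predictor and explanation generator are described in the paper linked above.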