High Accuracy and Hidden Disparities: Investigating Foundation Model Performance in Clinical Cognitive Assessment

Abstract

Evaluating foundation models for clinical practice with human-designed metrics may mask fundamental differences in how they process information. We investigated this using the clock drawing test (CDT), a cognitive screening tool. Three foundation models achieved 94% accuracy on conventional metrics, matching experts. However, upon decomposing the CDT into 24 questions across five cognitive domains, results diverged significantly. Even in cases with unanimous model agreement, models disagreed with human raters 22% of the time. Performance varied drastically: 88% alignment with humans on rule-based executive questions, but only 46% on context-dependent anticipatory-thinking questions. Models also abstained three times more often than humans, primarily owing to poor data quality. These findings show that standard clinical evaluation metrics fail to capture how foundation models process information: high aggregate accuracy obscures component-level failures. We contribute a systematic evaluation of frontier models' healthcare capabilities, demonstrate theory-driven task decomposition, and discuss design implications for better human-AI collaborative systems.

Authors
Abhay Sheel Anand
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Deepak Ganesan
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Ravi Karkar
University of Massachusetts Amherst, Amherst, Massachusetts, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Health Equity and Underserved Populations

P1 - Room 124
7 presentations
2026-04-17, 20:15–21:45