Evaluating foundation models for clinical practice with human-designed metrics may mask fundamental differences in how these models process information. We investigated this using the clock drawing test (CDT), a common cognitive screening tool. Three foundation models achieved 94% accuracy on conventional metrics, matching expert raters. However, when we decomposed the CDT into 24 questions across five cognitive domains, results diverged sharply. Even in cases where all three models agreed unanimously, they still disagreed with human raters in 22% of cases. Performance varied drastically by domain: models aligned with humans on 88% of rule-based executive-function questions but only 46% of context-dependent anticipatory-thinking questions. We also observed that models abstained three times more often than humans, primarily owing to poor data quality. These findings show that standard clinical evaluation metrics fail to capture how foundation models process information: high aggregate accuracy obscures component-level failures. We contribute a systematic evaluation of frontier models' healthcare capabilities, demonstrate theory-driven task decomposition as an evaluation method, and discuss design implications for better human-AI collaborative systems.
ACM CHI Conference on Human Factors in Computing Systems