Results-Actionability Gap: Understanding How Practitioners Evaluate LLM Products in the Wild

Abstract

How do product teams evaluate LLM-powered products? As organizations integrate large language models (LLMs) into digital products, the models' unpredictable behavior renders traditional evaluation approaches inadequate, yet little is known about how practitioners navigate this challenge. Through interviews with nineteen practitioners across diverse sectors, we identify ten evaluation practices, ranging from informal 'vibe checks' to organizational meta-work. Beyond confirming four previously documented challenges, we introduce a fifth, which we call the results-actionability gap: practitioners gather evaluation data but cannot translate their findings into concrete product improvements. Drawing on patterns observed in successful teams, we contribute strategies for bridging this gap and supporting practitioners' formalization journey from ad-hoc interpretive practices (e.g., vibe checks) toward systematic evaluation. Our analysis suggests that these interpretive practices are necessary adaptations to the characteristics of LLMs rather than methodological failures. For HCI researchers, this presents an opportunity to support practitioners in systematizing their emerging practices rather than developing new evaluation frameworks.

Authors
Willem van der Maden
IT University of Copenhagen, Copenhagen, Denmark
Malak Sadek
Cambridge University, Cambridge, United Kingdom
Ziang Xiao
Johns Hopkins University, Baltimore, Maryland, United States
Aske Mottelson
IT University of Copenhagen, Copenhagen, Denmark
Q. Vera Liao
University of Michigan, Ann Arbor, Michigan, United States
Jichen Zhu
IT University of Copenhagen, Copenhagen, Denmark

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: AI Collaboration in Practice

P1 - Room 128
7 presentations
2026-04-14, 18:00–19:30