Designing Staged Evaluation Workflows for LLMs: Integrating Domain Experts, Lay Users, and Model-Generated Evaluation Criteria

Abstract

Large Language Models (LLMs) are increasingly used for domain-specific tasks, yet evaluating their outputs remains challenging. A common strategy is to apply evaluation criteria to assess alignment with domain-specific standards, but little is understood about how criteria differ across sources or where each type is most useful in the evaluation process. This study investigates criteria developed by domain experts, lay users, and LLMs to identify their complementary roles within an evaluation workflow. Results show that experts produce fact-based criteria with long-term value, lay users emphasize usability with a shorter-term focus, and LLMs target procedural checks for immediate task requirements. We also examine how criteria evolve between a priori and a posteriori phases, noting drift across stages as well as convergence in the a posteriori phase. Based on our observations, we propose design guidelines for a staged evaluation workflow that combines the complementary strengths of these sources to balance quality, cost, and scalability.

Authors
Annalisa Szymanski
University of Notre Dame, South Bend, Indiana, United States
Simret Araya Gebreegziabher
University of Notre Dame, Notre Dame, Indiana, United States
Oghenemaro Anuyah
Microsoft, Redmond, Washington, United States
Ronald Metoyer
University of Notre Dame, South Bend, Indiana, United States
Toby Jia-Jun Li
University of Notre Dame, Notre Dame, Indiana, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Explaining and Evaluating AI Systems

Area 1 + 2 + 3: theatre
7 presentations
2026-04-16, 20:15 to 21:45