Existing frameworks in visualization and HCI emphasize iteration, data grounding, and stakeholder needs, but they have not fully explored how evaluation might persist across design phases, adapt to compressed timelines, and support stakeholder engagement and elicitation. Building on these frameworks, we introduce evaluation-first design (EvalOps), an approach that treats evaluation as a material component of the design process, broadening its scope to include when evaluation occurs, who participates, how results inform design decisions, and how metrics anchor stakeholder engagement and adoption. EvalOps emphasizes tighter feedback loops, co-evaluation with stakeholders, malleable forms of evaluation, and goals-to-metrics grounding. We illustrate how EvalOps shapes design outcomes through two case studies of data-visualization and LLM-enabled reasoning tools, showing how evaluation-driven design builds alignment and trust, surfaces design opportunities earlier, and maintains coherence under rapidly changing constraints. We contrast EvalOps with current visualization design methodologies and discuss opportunities for extending evaluation-centered framings to other active areas of design research.