ClearFairy: Capturing Creative Workflows through Decision Structuring, In-Situ Questioning, and Rationale Inference

Abstract

Capturing professionals’ decision-making in creative workflows (e.g., UI/UX design) is essential for reflection, collaboration, and knowledge sharing, yet existing methods often leave rationales incomplete and implicit decisions hidden. To address this, we present the CLEAR approach, which structures reasoning into cognitive decision steps—linked units of actions, artifacts, and explanations—making decisions traceable with generative AI. Building on CLEAR, we introduce ClearFairy, a think-aloud AI assistant for UI design that detects weak explanations, asks lightweight clarifying questions, and infers missing rationales. In a study with twelve professionals, 85% of ClearFairy’s inferred rationales were accepted as-is or with revisions. Notably, the system increased strong explanations—rationales providing sufficient causal reasoning—from 14% to 83% without adding cognitive demand. Furthermore, exploratory applications demonstrate that captured steps can enhance generative AI agents in Figma, yielding predictions better aligned with professionals’ and producing more coherent outcomes. We release a dataset of 417 decision steps to support future research.

Authors
Kihoon Son
KAIST, Daejeon, Republic of Korea
DaEun Choi
KAIST, Daejeon, Republic of Korea
Tae Soo Kim
KAIST, Daejeon, Republic of Korea
Young-Ho Kim
NAVER AI Lab, Seongnam, Republic of Korea
Sangdoo Yun
NAVER AI Lab, Seongnam, Republic of Korea
Juho Kim
KAIST, Daejeon, Republic of Korea

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Human-in-the-Loop Machine Learning Interfaces

P1 - Room 111
7 presentations
April 17, 2026, 18:00–19:30