PREFAB: PREFerence-based Affective Modeling for Low-Budget Self-Annotation

Abstract

Self-annotation is the gold standard for collecting affective state labels in affective computing. Existing methods typically rely on full annotation, requiring users to continuously label affective states across entire sessions. While this process yields fine-grained data, it is time-consuming, cognitively demanding, and prone to fatigue and errors. To address these issues, we present PREFAB, a low-budget retrospective self-annotation method that targets affective inflection regions rather than full annotation. Grounded in the peak-end rule and ordinal representations of emotion, PREFAB employs a preference learning model to detect relative affective changes, directing annotators to label only selected segments while interpolating the remainder of the stimulus. We further introduce a preview mechanism that provides brief contextual cues to assist annotation. We evaluate PREFAB through a technical performance study and a 25-participant user study. Results show that PREFAB outperforms baselines in modeling affective inflections while mitigating workload (and conditionally mitigating temporal burden). Importantly, PREFAB improves annotator confidence without degrading annotation quality.
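The abstract describes a pipeline in which only selected inflection segments are annotated and the rest of the trace is interpolated. The paper's actual preference learning model is not shown here; the following is a minimal illustrative sketch under assumed details, using a simple change-magnitude heuristic as a stand-in for the learned model and linear interpolation between sparse labels:

```python
import numpy as np

def select_inflection_segments(signal, top_k=3, win=5):
    """Rank windows by magnitude of local change (an illustrative
    stand-in for the paper's preference model) and return the
    top-k window centers, sorted in time order."""
    changes = np.array([abs(signal[i + win] - signal[i])
                        for i in range(len(signal) - win)])
    centers = np.argsort(changes)[::-1][:top_k] + win // 2
    return np.sort(centers)

def interpolate_labels(length, anchor_idx, anchor_labels):
    """Fill the full timeline by linear interpolation between the
    sparsely annotated anchor points."""
    return np.interp(np.arange(length), anchor_idx, anchor_labels)

# Toy example: a 1-D affect trace with one sharp inflection region.
trace = np.concatenate([np.zeros(40), np.linspace(0, 1, 20), np.ones(40)])
anchors = select_inflection_segments(trace, top_k=4)
labels = trace[anchors]  # stands in for the annotator's sparse labels
dense = interpolate_labels(len(trace), anchors, labels)
```

Here the annotator would label only the few anchor frames, and the dense trace is reconstructed from them; the paper's method additionally provides preview context for each selected segment.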

Award
Honorable Mention
Authors
JaeYoung Moon
Gwangju Institute of Science and Technology (GIST), Gwangju, Republic of Korea
Youjin Choi
Gwangju Institute of Science and Technology (GIST), Gwangju, Republic of Korea
Yucheon Park
Gwangju Institute of Science and Technology (GIST), Gwangju, Republic of Korea
David Melhart
University of Southern Denmark, Odense, Denmark
Georgios Yannakakis
University of Malta, Msida, Malta
KyungJoong Kim
Gwangju Institute of Science and Technology (GIST), Gwangju, Republic of Korea

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: AI Systems for Human Goals

P1 - Room 122
7 presentations
2026-04-14, 18:00–19:30