GenFaceUI: Meta-Design of Generative Personalized Facial Expression Interfaces for Intelligent Agents

Abstract

This work investigates generative facial expression interfaces for intelligent agents from a meta-design perspective. We propose the Generative Personalized Facial Expression Interface (GPFEI) framework, which organizes rule-bounded spaces, character identity, and context–expression mapping to address challenges of control, coherence, and alignment in run-time facial expression generation. To operationalize this framework, we developed GenFaceUI, a proof-of-concept tool that enables designers to create templates, apply semantic tags, define rules, and iteratively test outcomes. We evaluated the tool through a qualitative study with twelve designers. The results show perceived gains in controllability and consistency, while revealing needs for structured visual mechanisms and lightweight explanations. These findings contribute a conceptual framework, a proof-of-concept tool, and empirical insights that highlight both opportunities and challenges for advancing generative facial expression interfaces within a broader meta-design paradigm.

Authors
Yate Ge
Tongji University, Shanghai, China
Lin Tian
Tongji University, Shanghai, China
Yi Dai
Tongji University, Shanghai, China
Shuhan Pan
University of Washington, Seattle, Washington, United States
Yiwen Zhang
Wuhan University of Technology, Wuhan, Hubei, China
Qi Wang
Tongji University, Shanghai, China
Weiwei Guo
Tongji University, Shanghai, China
Xiaohua Sun
Southern University of Science and Technology, Shenzhen, Guangdong, China

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Methods, Reviews & Ethical Futures

P1 - Room 131
7 presentations
2026-04-15, 18:00–19:30