Behavior-Aware Anthropometric Scene Generation for Human-Usable 3D Layouts

Abstract

Well-designed indoor scenes should prioritize how people can act within a space rather than merely what objects to place. However, existing 3D scene generation methods emphasize visual and semantic plausibility, while insufficiently addressing whether people can comfortably walk, sit, or manipulate objects. To bridge this gap, we present a Behavior-Aware Anthropometric Scene Generation framework. Our approach leverages vision–language models (VLMs) to analyze object–behavior relationships, translating spatial requirements into parametric layout constraints adapted to user-specific anthropometric data. We conducted comparative studies with state-of-the-art models using geometric metrics and a user perception study (N=16). We further conducted in-depth human-scale studies (individuals, N=20; groups, N=18). The results showed improvements in task completion time, trajectory efficiency, and human-object manipulation space. This study contributes a framework that bridges VLM-based interaction reasoning with anthropometric constraints, validated through both technical metrics and real-scale human usability studies.

Authors
Semin Jin
Hanyang University, Seoul, Korea, Republic of
Donghyuk Kim
Hanyang University, Seoul, Korea, Republic of
Jeongmin Ryu
Hanyang University, Seoul, Korea, Republic of
Kyung Hoon Hyun
Hanyang University, Seoul, Korea, Republic of

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Generative AI and Creative Workflows

P1 - Room 123
6 presentations
2026-04-14, 20:15–21:45