PoseVEC: Authoring Adaptive Pose-aware Effects Using Visual Programming and Demonstrations

Abstract

Pose-aware visual effects, where graphics assets and animations are rendered reactively to a person's pose, have become increasingly popular, appearing on mobile devices, the web, and even head-mounted displays such as AR glasses. Yet creating such effects remains difficult for novices. In a traditional video editing workflow, a creator can use keyframes to produce expressive but non-adaptive results that cannot be reused for other videos. Alternatively, programming-based approaches allow users to develop interactive effects, but make it cumbersome to quickly express creative intent. In this work, we propose a lightweight visual programming workflow for authoring adaptive and expressive pose effects. By combining a programming-by-demonstration paradigm with visual programming, we simplify three key tasks in the authoring process: creating pose triggers, designing animation parameters, and rendering. We evaluated our system with a qualitative user study and a replicated example study, finding that all participants could create effects efficiently.
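The paper does not include code in this listing, but as a rough illustration of the two authoring concepts named above, a pose trigger and a pose-driven animation parameter, here is a minimal Python sketch. The landmark names, the normalized-coordinate convention (y increases downward), and all function names are assumptions for illustration, not PoseVEC's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# Hypothetical pose representation: landmark name -> normalized (x, y),
# with y increasing downward. Not PoseVEC's data model.
Pose = Dict[str, Tuple[float, float]]

@dataclass
class PoseTrigger:
    """Fires an effect whenever its pose predicate is true for a frame."""
    name: str
    condition: Callable[[Pose], bool]

    def check(self, pose: Pose) -> bool:
        return self.condition(pose)

def hands_above_head(pose: Pose) -> bool:
    # True when both wrists are above the head (smaller y = higher up).
    head_y = pose["head"][1]
    return (pose["left_wrist"][1] < head_y
            and pose["right_wrist"][1] < head_y)

def sparkle_scale(pose: Pose) -> float:
    # An animation parameter continuously driven by the pose:
    # a wider hand spread yields a larger effect.
    return abs(pose["right_wrist"][0] - pose["left_wrist"][0])

trigger = PoseTrigger("sparkles", hands_above_head)

# One frame of hypothetical tracking output.
frame_pose = {
    "head": (0.50, 0.20),
    "left_wrist": (0.30, 0.15),
    "right_wrist": (0.72, 0.12),
}

if trigger.check(frame_pose):
    print(f"render sparkles, scale={sparkle_scale(frame_pose):.2f}")
```

In a demonstration-based workflow like the one the abstract describes, predicates and parameter mappings of this kind would be derived from a recorded example pose rather than written by hand; the sketch only shows the shape of the resulting runtime logic.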

Authors
Yongqi Zhang
George Mason University, Fairfax, Virginia, United States
Cuong Nguyen
Adobe Research, San Francisco, California, United States
Rubaiat Habib Kazi
Adobe Research, Seattle, Washington, United States
Lap-Fai Yu
George Mason University, Fairfax, Virginia, United States
Paper URL

https://doi.org/10.1145/3586183.3606788

Conference: UIST 2023

ACM Symposium on User Interface Software and Technology

Session: Words and Visuals: Authoring Tools for Text and Images

Gold Room
6 presentations
2023-11-01 19:50 – 21:10