iPose: Interactive Human Pose Reconstruction from Video

Abstract

Reconstructing 3D human poses from video has wide applications, such as character animation and sports analysis. Automatic 3D pose reconstruction methods have demonstrated promising results, but failures still occur due to the diversity of human actions, capture conditions, and depth ambiguities. Manual intervention therefore remains indispensable, yet it can be time-consuming and require professional skills. We present iPose, an interactive tool that facilitates intuitive human pose reconstruction from a given video. Our tool combines human perception, for specifying pose appearance and keeping the user in control, with video frame processing algorithms, for precision and automation. A user manipulates the projection of a 3D pose via 2D operations on top of video frames, and the 3D pose is updated correspondingly while satisfying both kinematic and video frame constraints. Pose updates are propagated temporally to reduce user workload. We evaluate the effectiveness of iPose with a user study on the 3DPW dataset and expert interviews.
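The abstract does not detail the solver, but the core interaction it describes (drag a joint's 2D projection, then re-solve the 3D pose under kinematic constraints) can be illustrated with a minimal sketch. The following Python snippet is an assumption-laden illustration, not the authors' implementation: it uses a toy three-joint chain, an assumed pinhole camera, and fixed bone lengths as the only kinematic constraint.

```python
# Minimal sketch (not the authors' method): when a user drags one joint's 2D
# projection on a video frame, re-solve the 3D pose so its projection matches
# the edit while bone lengths are preserved. The skeleton, camera model, and
# weights below are all illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

# Toy 3-joint "arm": shoulder -> elbow -> wrist, with parent indices.
PARENTS = [-1, 0, 1]
REST_POSE = np.array([[0.0, 0.0, 3.0],
                      [0.3, 0.0, 3.0],
                      [0.6, 0.0, 3.0]])
BONE_LEN = np.array([np.linalg.norm(REST_POSE[i] - REST_POSE[PARENTS[i]])
                     for i in range(1, len(PARENTS))])
FOCAL = 500.0  # assumed pinhole focal length in pixels


def project(joints_3d):
    """Pinhole projection of 3D joints (N,3) to 2D pixels (N,2)."""
    return FOCAL * joints_3d[:, :2] / joints_3d[:, 2:3]


def residuals(x, edited_joint, target_2d, prev_pose):
    """Reprojection error on the edited joint + bone-length + closeness terms."""
    pose = x.reshape(-1, 3)
    r_proj = project(pose)[edited_joint] - target_2d           # match the user's 2D edit
    lens = np.array([np.linalg.norm(pose[i] - pose[PARENTS[i]])
                     for i in range(1, len(PARENTS))])
    r_bone = 10.0 * (lens - BONE_LEN)                           # kinematic constraint
    r_prior = 0.1 * (pose - prev_pose).ravel()                  # stay near the previous pose
    return np.concatenate([r_proj, r_bone, r_prior])


def update_pose(prev_pose, edited_joint, target_2d):
    """Return a 3D pose whose projection of `edited_joint` hits `target_2d`."""
    res = least_squares(residuals, prev_pose.ravel(),
                        args=(edited_joint, target_2d, prev_pose))
    return res.x.reshape(-1, 3)


if __name__ == "__main__":
    # The user drags the wrist's projection 40 px to the right.
    target = project(REST_POSE)[2] + np.array([40.0, 0.0])
    new_pose = update_pose(REST_POSE, edited_joint=2, target_2d=target)
    print("wrist projects to", project(new_pose)[2], "target", target)
```

Temporal propagation of an edit, as mentioned in the abstract, could under the same assumptions be approximated by blending the solved pose change into neighboring frames with a decaying weight; the paper's actual propagation scheme is not specified here.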

Authors
Jingyuan Liu
The University of Tokyo, Tokyo, Japan
Li-Yi Wei
Adobe Research, San Jose, California, United States
Ariel Shamir
Reichman University, Herzliya, Israel
Takeo Igarashi
The University of Tokyo, Tokyo, Japan
Paper URL

doi.org/10.1145/3613904.3641944

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Touch, Gesture and Posture

Room 314
4 presentations
2024-05-14 23:00 – 2024-05-15 00:20