Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion and Inverse Kinematics

Abstract

We present Pose-on-the-Go, a full-body pose estimation system that uses sensors already found in today’s smartphones. This stands in contrast to prior systems, which require worn or external sensors. We achieve this result via extensive sensor fusion, leveraging a phone’s front and rear cameras, the user-facing depth camera, touchscreen, and IMU. Even still, we are missing data about a user’s body (e.g., angle of the elbow joint), and so we use inverse kinematics to estimate and animate probable body poses. We provide a detailed evaluation of our system, benchmarking it against a professional-grade Vicon tracking system. We conclude with a series of demonstration applications that underscore the unique potential of our approach, which could be enabled on many modern smartphones with a simple software update.
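As the abstract notes, joints the phone cannot observe directly (e.g., the elbow) are filled in with inverse kinematics. The paper's actual solver is not reproduced here; what follows is only a minimal sketch of the underlying idea, a two-bone analytic IK step that recovers an elbow angle from tracked shoulder and wrist positions via the law of cosines. The function name, default bone lengths, and NumPy dependency are illustrative assumptions, not part of the published system.

import numpy as np

def elbow_angle(shoulder, wrist, upper_arm_len=0.30, forearm_len=0.27):
    """Two-bone analytic IK: recover the interior elbow angle (radians)
    from shoulder and wrist positions using the law of cosines.

    Bone lengths are illustrative defaults in meters; a real system
    would calibrate them per user.
    """
    d = np.linalg.norm(np.asarray(wrist, dtype=float) - np.asarray(shoulder, dtype=float))
    # Clamp the reach to the physically possible range to avoid NaNs.
    d = np.clip(d, abs(upper_arm_len - forearm_len), upper_arm_len + forearm_len)
    # Law of cosines: d^2 = a^2 + b^2 - 2ab*cos(elbow)
    cos_elbow = (upper_arm_len**2 + forearm_len**2 - d**2) / (2 * upper_arm_len * forearm_len)
    return np.arccos(np.clip(cos_elbow, -1.0, 1.0))

# Example: shoulder at the origin, wrist 0.45 m away -> partially bent elbow.
print(np.degrees(elbow_angle([0, 0, 0], [0.45, 0, 0])))

With these default bone lengths, a shoulder-to-wrist distance of 0.45 m yields an elbow angle of roughly 104 degrees, while a fully extended arm (0.57 m) yields 180 degrees.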

Authors
Karan Ahuja
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Sven Mayer
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Mayank Goel
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chris Harrison
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3411764.3445582

Paper URL

https://doi.org/10.1145/3411764.3445582

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Vision and Sensing

[A] Paper Room 10, 2021-05-12 17:00:00~2021-05-12 19:00:00
[B] Paper Room 10, 2021-05-13 01:00:00~2021-05-13 03:00:00
[C] Paper Room 10, 2021-05-13 09:00:00~2021-05-13 11:00:00