HybridTrak: Adding Full-Body Tracking to VR Using an Off-the-Shelf Webcam

Abstract

Full-body tracking in virtual reality improves presence, allows interaction via body postures, and facilitates better social expression among users. However, full-body tracking systems today require a complex setup fixed to the environment (e.g., multiple lighthouses/cameras) and a laborious calibration process, which goes against the desire to make VR systems more portable and integrated. We present HybridTrak, which provides accurate, real-time full-body tracking by augmenting inside-out upper-body VR tracking systems with a single external off-the-shelf RGB web camera. HybridTrak converts users' 2D full-body poses from the webcam into 3D poses with a fully neural solution that leverages the inside-out upper-body tracking data. We showed HybridTrak is more accurate than RGB- or depth-based tracking methods on the MPI-INF-3DHP dataset. We also tested HybridTrak in the popular VRChat app and showed that body postures presented by HybridTrak are more distinguishable and more natural than a solution using an RGBD camera.
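The abstract describes lifting 2D webcam poses to 3D by conditioning a neural network on the headset's inside-out upper-body tracking. The paper's actual architecture is not given here; the sketch below is only an illustration of that general idea, and the class name, joint counts, input encoding, and MLP design are all assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' released code): lift 2D webcam keypoints
# to full-body 3D joints, conditioned on inside-out upper-body tracking data.
import torch
import torch.nn as nn

NUM_JOINTS_2D = 25   # assumed number of 2D keypoints from the webcam pose estimator
NUM_TRACKED_3D = 3   # headset + two controllers from inside-out tracking (assumed)
NUM_JOINTS_3D = 25   # full-body 3D joints to predict (assumed)

class HybridPoseLifter(nn.Module):
    """Small MLP that regresses 3D joints from 2D keypoints plus upper-body tracking."""
    def __init__(self, hidden=512):
        super().__init__()
        # Inputs: (x, y) per 2D keypoint, plus position + quaternion per tracked device.
        in_dim = NUM_JOINTS_2D * 2 + NUM_TRACKED_3D * 7
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_JOINTS_3D * 3),
        )

    def forward(self, kp2d, upper_body):
        # kp2d: (B, NUM_JOINTS_2D, 2) normalized webcam image coordinates
        # upper_body: (B, NUM_TRACKED_3D, 7) device pose in headset space
        x = torch.cat([kp2d.flatten(1), upper_body.flatten(1)], dim=1)
        return self.net(x).view(-1, NUM_JOINTS_3D, 3)

# Example forward pass with dummy inputs
model = HybridPoseLifter()
kp2d = torch.rand(1, NUM_JOINTS_2D, 2)
upper = torch.rand(1, NUM_TRACKED_3D, 7)
pose3d = model(kp2d, upper)  # (1, NUM_JOINTS_3D, 3) full-body joints
```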

Authors
Jackie (Junrui) Yang
Stanford University, Stanford, California, United States
Tuochao Chen
EECS, Beijing, Beijing, China
Fang Qin
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Monica Lam
Stanford University, Stanford, California, United States
James A. Landay
Stanford University, Stanford, California, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502045

Video

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Input Techniques

5 presentations
2022-05-03 20:00:00 – 21:15:00