ARticulate: Interactive Visual Guidance for Demonstrated Rotational Degrees of Freedom in Mobile AR

Abstract

Mobile Augmented Reality (AR) offers a powerful way to provide spatially aware guidance for real-world applications. Many of these applications involve configuring a camera or an articulated subject, asking users to navigate several spatial degrees of freedom (DOF) at once. Most guidance for such tasks relies on decomposing the available DOF into subspaces that map more easily to simple 1D or 2D visualizations. Unfortunately, different factorizations of the same motion often map to very different visual feedback, and finding the factorization that best matches a user's intuition can be difficult. We propose an interactive approach that infers rotational degrees of freedom from short user demonstrations. Users select one or two DOF at a time by demonstrating a small range of motion, which we use to learn a rotational frame that best aligns with the user's control of the object. We show that deriving visual feedback from this learned rotational frame leads to improved task completion times on 6DOF guidance tasks, compared to the default reference frames used in most mixed reality applications.
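As a rough illustration of the core idea, the sketch below shows one way a rotational DOF could be inferred from a short demonstration: collect the per-step rotation vectors between consecutive orientation samples and take their principal direction as the demonstrated axis. This is only a minimal sketch under assumed inputs (unit quaternions in [w, x, y, z] order); the function names and the axis-fitting method are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: estimate a dominant rotation axis from a short
# demonstrated motion, given a sequence of orientation quaternions.
# NOT the paper's implementation; names and method are assumptions.
import numpy as np

def quat_conjugate(q):
    """Conjugate of a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(q1, q2):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dominant_rotation_axis(orientations):
    """Fit a single rotation axis to a demonstrated sequence of unit
    quaternions: convert each consecutive pair to a rotation vector,
    then take the principal direction of those vectors via SVD."""
    rot_vecs = []
    for q_prev, q_next in zip(orientations[:-1], orientations[1:]):
        # Relative rotation from one sample to the next.
        dq = quat_multiply(q_next, quat_conjugate(q_prev))
        if dq[0] < 0:  # keep quaternions on the same hemisphere
            dq = -dq
        angle = 2.0 * np.arccos(np.clip(dq[0], -1.0, 1.0))
        axis = dq[1:]
        norm = np.linalg.norm(axis)
        if norm > 1e-8:
            rot_vecs.append(angle * axis / norm)
    if not rot_vecs:
        raise ValueError("demonstration contains no measurable rotation")
    V = np.array(rot_vecs)
    # First right singular vector = direction of largest second moment,
    # i.e. the axis the per-step rotations cluster around.
    _, _, vt = np.linalg.svd(V, full_matrices=False)
    axis = vt[0]
    # Orient the axis consistently with the mean rotation direction.
    if np.dot(axis, V.mean(axis=0)) < 0:
        axis = -axis
    return axis
```

In a full system, an axis estimated this way could anchor the visual feedback (e.g., a guidance ring drawn around it); repeating the procedure for a second demonstration would yield a second axis of the learned rotational frame.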

Authors
Nhan Tran
Cornell University, Ithaca, New York, United States
Ethan Yang
Cornell University, Ithaca, New York, United States
Abe Davis
Cornell Tech, Cornell University, New York, New York, United States
DOI

10.1145/3706598.3713179

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713179

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: AR Interaction

Annex Hall F206
7 presentations
2025-04-30 23:10:00 – 2025-05-01 00:40:00