SharedNeRF: Leveraging Photorealistic and View-dependent Rendering for Real-time and Remote Collaboration

Abstract

Collaborating around physical objects often requires examining a design or hardware artifact in detail, as when reviewing or inspecting prototypes. When collaborators are remote, coordinating the sharing of views of the physical environment becomes challenging. Video-conferencing tools often do not provide the viewpoints a remote viewer needs, and while RGB-D cameras offer 3D views, they lack the necessary fidelity. We introduce SharedNeRF, designed to enhance synchronous remote collaboration by leveraging the photorealistic and view-dependent rendering of Neural Radiance Fields (NeRF). The system complements the higher visual quality of the NeRF rendering with the instantaneity of a point cloud, combining them by carefully accommodating dynamic elements within the shared space, such as hand gestures and moving objects. It employs a head-mounted camera for data collection, creating a volumetric task space on the fly and updating it as the task space changes. In our preliminary study, participants successfully completed a flower arrangement task, benefiting from SharedNeRF's ability to render the space in high fidelity from various viewpoints.
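The abstract describes compositing a photorealistic but slowly updated NeRF rendering with a low-latency point-cloud rendering of dynamic regions. A minimal sketch of such per-pixel compositing is shown below; the `dynamic_mask`, the image buffers, and the hard-switch blending scheme are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def composite_views(nerf_rgb, pointcloud_rgb, dynamic_mask):
    """Per-pixel composite: where the scene is changing (dynamic_mask True),
    show the low-latency point-cloud rendering; elsewhere show the
    photorealistic NeRF rendering. (Illustrative sketch only.)"""
    mask = dynamic_mask[..., None]          # broadcast mask over RGB channels
    return np.where(mask, pointcloud_rgb, nerf_rgb)

# Toy 2x2 frames: NeRF frame is uniform gray, point cloud is uniform white.
nerf = np.full((2, 2, 3), 128, dtype=np.uint8)
pc = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[True, False],
                 [False, True]])            # hypothetical dynamic regions
out = composite_views(nerf, pc, mask)
```

In practice a system like this would derive the dynamic mask from depth or motion cues and could feather the boundary rather than hard-switching, but the core idea is this per-pixel source selection.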

Award
Honorable Mention
Authors
Mose Sakashita
Microsoft Research, Redmond, Washington, United States
Bala Kumaravel
Microsoft Research, Redmond, Washington, United States
Nicolai Marquardt
Microsoft Research, Redmond, Washington, United States
Andrew D. Wilson
Microsoft Research, Redmond, Washington, United States
Paper URL

doi.org/10.1145/3613904.3642945

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Learning and Working

317
5 presentations
2024-05-16 20:00:00
2024-05-16 21:20:00