Volumetric Mixed Reality Telepresence for Real-time Cross Modality Collaboration

Abstract

Mixed-reality telepresence allows local and remote users to feel as if they are present together in the same space. In this paper we report on a mixed-reality volumetric telepresence system that is adaptable, multi-user, and cross-modal, i.e. combining augmented and virtual reality technologies with face-to-face interactions. The system extends the state of the art by creating full-body and environmental volumetric renderings in real-time over local enterprise networks. We report findings from an evaluation in a training scenario that was adapted for remote delivery and led by an industry professional. Analysis of interviews and observed behaviours identifies varying attitudes towards virtually mediated full-body experiences and highlights the capacity of volumetric mixed-reality telepresence to facilitate personal experiences of co-presence and to ground communication with interlocutors.

Authors
Andrew Irlitti
The University of Melbourne, Melbourne, Australia
Mesut Latifoglu
The University of Melbourne, Melbourne, Australia
Qiushi Zhou
The University of Melbourne, Melbourne, Australia
Martin N. Reinoso
The University of Melbourne, Melbourne, Australia
Thuong Hoang
Deakin University, Geelong, Australia
Eduardo Velloso
The University of Melbourne, Melbourne, Australia
Frank Vetere
The University of Melbourne, Melbourne, Australia
Paper URL

https://doi.org/10.1145/3544548.3581277

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Collaboration in Mixed Realities

Hall D
6 presentations
2023-04-26, 20:10:00 – 21:35:00