RadarVR: Exploring Spatiotemporal Visual Guidance in Cinematic VR

Abstract

In cinematic VR, viewers can only see a limited portion of the scene at any time. As a result, they may miss important events outside their field of view. While many techniques offer spatial guidance (where to look), there has been little work on temporal guidance (when to look). Temporal guidance gives viewers a look-ahead time, allowing them to plan their head motion for important events. This paper introduces spatiotemporal visual guidance and presents a new widget, RadarVR, which shows both spatial and temporal information of regions of interest (ROIs) in a video. Using RadarVR, we conducted a study to investigate the impact of temporal guidance and explore trade-offs between spatiotemporal and spatial-only visual guidance. Results show spatiotemporal feedback allows users to see a greater percentage of ROIs, with 81% more seen from their initial onset. We discuss design implications for future work in this space.

Authors
Sean J. Liu
Stanford University, Stanford, California, United States
Rorik Henrikson
Meta, Toronto, Ontario, Canada
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Michael Glueck
Meta, Toronto, Ontario, Canada
Mark Parent
Meta, Toronto, Ontario, Canada
Paper URL

https://doi.org/10.1145/3586183.3606734

Conference: UIST 2023

ACM Symposium on User Interface Software and Technology

Session: Sensory Shenanigans: Immersion and Illusions in Mixed Reality

Venetian Room
6 presentations
2023-11-01 18:00 – 19:20