XR Perception

Conference Name
UIST 2022
Exploring Sensory Conflict Effect Due to Upright Redirection While Using VR in Reclining & Lying Positions
Abstract

When users use Virtual Reality (VR) in nontraditional postures, such as reclining or lying in relaxed positions, their views lean upwards and need to be corrected so that they see upright content and perceive interactions as if they were standing. Such upright redirection is expected to cause visual-vestibular-proprioceptive conflict, affecting users' internal perceptions (e.g., body ownership, presence, simulator sickness) and external perceptions (e.g., egocentric space perception) in VR. Different body reclining angles may affect vestibular sensitivity and lead to dynamic weighting of multi-sensory signals during sensory integration. In this paper, we investigated the impact of upright redirection on users' perceptions, with users' physical bodies tilted backward at various angles and their views upright-redirected accordingly. The results showed that upright redirection led to simulator sickness, confused self-awareness, a weak upright illusion, and increased space perception deviations to varying extents at different reclining positions, with the worst effects at the 45-degree condition. Based on these results, we designed illusion-based and sensory-based methods that preliminary evaluations showed to be effective in reducing the impact of sensory conflict.
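
To make the core idea concrete, the sketch below shows one plausible way such upright redirection could be applied: the tracked head orientation is counter-rotated about the pitch axis by the body's recline angle, so a reclining user still sees the virtual scene as if upright. The function and parameter names (`upright_redirect`, `recline_angle_deg`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pitch_rotation(angle_rad: float) -> np.ndarray:
    """3x3 rotation about the x (pitch) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def upright_redirect(head_rotation: np.ndarray, recline_angle_deg: float) -> np.ndarray:
    """Counter-rotate the tracked head orientation by the body's recline angle,
    so content rendered for a reclining user appears as if they were standing."""
    correction = pitch_rotation(-np.deg2rad(recline_angle_deg))
    return correction @ head_rotation

# Example: a user reclined 45 degrees backward, tracked head pose = identity.
redirected_view = upright_redirect(np.eye(3), 45.0)
```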

Authors
Tianren Luo
Institute of Software, Beijing, China
Zhenxuan He
Capital Normal University, Beijing, China
Chenyang Cai
Beijing University of Technology, Beijing, China
Teng Han
Institute of Software, Chinese Academy of Sciences, Beijing, China
Zhigeng Pan
College of Artificial Intelligence, Nanjing, China
Feng Tian
Institute of Software, Chinese Academy of Sciences, Beijing, China
Paper URL

https://doi.org/10.1145/3526113.3545692

Color-to-Depth Mappings as Depth Cues in Virtual Reality
Abstract

Despite significant improvements to Virtual Reality (VR) technologies, most VR displays are fixed-focus, and depth perception remains a key issue that limits the user experience and interaction performance. To supplement humans' inherent depth cues (e.g., retinal blur, motion parallax), we investigate users' perceptual mappings of distance to virtual objects' appearance to generate visual cues aimed at enhancing depth perception. As a first step, we explore color-to-depth mappings for virtual objects so that their appearance differs in saturation and value to reflect their distance. Through a series of controlled experiments, we elicit and analyze users' strategies for mapping a virtual object's hue, saturation, value, and a combination of saturation and value to its depth. Based on the collected data, we implement a computational model that generates color-to-depth mappings fulfilling adjustable requirements on confusion probability, number of depth levels, and consistent saturation/value changing tendency. We demonstrate the effectiveness of color-to-depth mappings in a 3D sketching task, showing that, compared to single-colored targets and strokes, users with our mappings were more confident in their accuracy without extra cognitive load and reduced the perceived depth error by 60.8%. We also implement four VR applications and demonstrate how our color cues can benefit the user experience and interaction performance in VR.
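
As an illustration of the general idea (not the authors' elicited mappings or computational model), the following sketch maps an object's depth to its color by varying saturation and value monotonically while keeping hue fixed; the specific ranges and the choice to make nearer objects more saturated and brighter are assumptions.

```python
import colorsys

def depth_to_color(depth_m: float, near: float = 0.5, far: float = 5.0,
                   hue: float = 0.6) -> tuple:
    """Map distance to an RGB colour by varying saturation and value
    monotonically with depth, keeping hue fixed."""
    t = min(max((depth_m - near) / (far - near), 0.0), 1.0)  # normalised depth in [0, 1]
    saturation = 1.0 - 0.6 * t   # assumption: nearer objects look more saturated
    value = 1.0 - 0.5 * t        # assumption: nearer objects look brighter
    return colorsys.hsv_to_rgb(hue, saturation, value)

# Example: colour for a stroke placed 2 m from the viewer.
r, g, b = depth_to_color(2.0)
```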

Authors
Zhipeng Li
Department of Computer Science and Technology, Tsinghua University, Beijing, China
Yikai Cui
Tsinghua University, Beijing, China
Tianze Zhou
Tsinghua University, Beijing, China
Yu Jiang
Tsinghua University, Beijing, China
Yuntao Wang
Tsinghua University, Beijing, China
Yukang Yan
Tsinghua University, Beijing, China
Michael Nebeling
University of Michigan, Ann Arbor, Michigan, United States
Yuanchun Shi
Tsinghua University, Beijing, China
Paper URL

https://doi.org/10.1145/3526113.3545646

Look over there! Investigating Saliency Modulation for Visual Guidance with Augmented Reality Glasses
Abstract

Augmented Reality (AR) has traditionally been used to display digital overlays in real environments. Many AR applications, such as remote collaboration, picking tasks, or navigation, require highlighting physical objects for selection or guidance. These highlights use graphical cues such as outlines and arrows. While effective, they greatly contribute to visual clutter, can occlude scene elements, and can be problematic for long-term use. As a substitute for such overlays, we explore saliency modulation to accentuate objects in the real environment and guide the user's gaze. Instead of manipulating video streams, as is done in perception and cognition research, we investigate saliency modulation of the real world using optical see-through head-mounted displays. This is a new challenge, since we do not have full control over the view of the real environment. In this work, we present our solution to this challenge, including built prototypes and their evaluation.
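
For intuition, the sketch below shows a simplified, image-space form of saliency modulation: contrast is amplified inside a target region and slightly attenuated elsewhere so the target draws gaze without explicit overlays. This mirrors the video-manipulation setting the paper contrasts against; on optical see-through displays, where light can only be added to the real view, the authors' actual solution necessarily differs. All names and parameters here are illustrative.

```python
import numpy as np

def modulate_saliency(image: np.ndarray, target_mask: np.ndarray,
                      boost: float = 1.4, suppress: float = 0.85) -> np.ndarray:
    """Amplify local contrast inside the target region of an RGB image in [0, 1]
    and slightly attenuate the rest, so the target stands out without overlays."""
    mean = image.mean(axis=(0, 1), keepdims=True)          # per-channel scene mean
    modulated = np.where(target_mask[..., None],
                         mean + (image - mean) * boost,     # push target away from the mean
                         image * suppress)                  # tone down the surroundings
    return np.clip(modulated, 0.0, 1.0)

# Example: draw attention to a 50x50 pixel region of a synthetic frame.
frame = np.random.rand(480, 640, 3)
mask = np.zeros((480, 640), dtype=bool)
mask[200:250, 300:350] = True
guided = modulate_saliency(frame, mask)
```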

Authors
Jonathan Sutton
University of Otago, Dunedin, New Zealand
Tobias Langlotz
University of Otago, Dunedin, New Zealand
Alexander Plopski
University of Otago, Dunedin, New Zealand
Stefanie Zollmann
University of Otago, Dunedin, New Zealand
Yuta Itoh
The University of Tokyo, Tokyo, Japan
Holger Regenbrecht
University of Otago, Dunedin, New Zealand
Paper URL

https://doi.org/10.1145/3526113.3545633

VRhook: A Data Collection Tool for VR Motion Sickness Research
Abstract

Despite the increasing popularity of VR games, one factor hindering the industry's rapid growth is motion sickness experienced by users. Symptoms such as fatigue and nausea severely hamper the user experience. Machine learning methods could be used to automatically detect motion sickness in VR experiences, but generating the extensive labeled dataset needed is a challenging task: it requires either very time-consuming manual labeling by human experts or modification of proprietary VR application source code for label capture. To overcome these challenges, we developed a novel data collection tool, VRhook, which can collect data from any VR game without needing access to its source code. This is achieved by dynamic hooking, which injects custom code into a game's run-time memory to record each video frame and its associated transformation matrices. From these, we can automatically extract various useful labels such as rotation, speed, and acceleration. In addition, VRhook can blend a customized screen overlay on top of game content to collect self-reported comfort scores. In this paper, we describe the technical development of VRhook, demonstrate its utility with an example, and describe directions for future research.
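
As a rough illustration of the kind of labels the abstract mentions (not VRhook's actual code), the sketch below derives speed, acceleration, and angular speed from a sequence of recorded 4x4 head/camera transforms sampled at a fixed frame interval; the function name and output keys are assumptions.

```python
import numpy as np

def motion_labels(transforms: list, dt: float) -> dict:
    """Derive speed, acceleration, and angular speed from a sequence of
    4x4 head/camera transforms sampled every dt seconds."""
    positions = np.array([T[:3, 3] for T in transforms])
    velocities = np.diff(positions, axis=0) / dt                              # m/s
    speeds = np.linalg.norm(velocities, axis=1)
    accelerations = np.linalg.norm(np.diff(velocities, axis=0), axis=1) / dt  # m/s^2

    angular_speeds = []
    for T0, T1 in zip(transforms[:-1], transforms[1:]):
        delta = T1[:3, :3] @ T0[:3, :3].T                   # relative rotation between frames
        angle = np.arccos(np.clip((np.trace(delta) - 1) / 2, -1.0, 1.0))
        angular_speeds.append(angle / dt)                   # rad/s

    return {"speed": speeds,
            "acceleration": accelerations,
            "angular_speed": np.array(angular_speeds)}
```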

Authors
Elliott Wen
The University of Auckland, Auckland, New Zealand
Tharindu Indrajith Kaluarachchi
The University of Auckland, Auckland, New Zealand
Shamane Siriwardhana
Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
Vanessa Tang
University of Auckland, Auckland, New Zealand
Mark Billinghurst
University of South Australia, Mawson Lakes, Australia
Robert W. Lindeman
University of Canterbury, Christchurch, New Zealand
Richard Yao
Facebook, San Francisco, California, United States
James Lin
Facebook, San Francisco, California, United States
Suranga Nanayakkara
Department of Information Systems and Analytics, National University of Singapore, Singapore, Singapore
Paper URL

https://doi.org/10.1145/3526113.3545656