Predicting the Noticeability of Dynamic Virtual Elements in Virtual Reality

Abstract

While Virtual Reality (VR) systems can present virtual elements such as notifications anywhere, designing them so they are neither missed by nor distracting to users is highly challenging for content creators. To address this challenge, we introduce a novel approach to predict the noticeability of virtual elements. It computes the visual saliency distribution of what users see and analyzes temporal changes in that distribution with respect to animated virtual elements. The computed features serve as input to a long short-term memory (LSTM) model that predicts whether a virtual element will be noticed. Our approach is based on data collected from 24 users in different VR environments performing tasks such as watching a video or typing. We evaluate our approach (n = 12) and show that it can predict when users notice a change to a virtual element to within 2.56 s of the ground truth, and we demonstrate its versatility with a set of applications. We believe that our predictive approach paves the way for computational design tools that assist VR content creators in building interfaces that automatically adapt virtual elements based on noticeability.
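To make the pipeline described in the abstract concrete, the sketch below shows one way its components could fit together: per-frame saliency maps of the rendered view are summarized relative to the animated element's screen region, and the resulting feature sequence drives an LSTM classifier that outputs the probability the element is noticed. This is a minimal illustration assuming PyTorch; the feature choices, the function and class names (element_saliency_features, NoticeabilityLSTM), and all hyperparameters are hypothetical, not the authors' implementation, and a real system would use a learned saliency model rather than the random maps used here for demonstration.

```python
import torch
import torch.nn as nn


def element_saliency_features(saliency: torch.Tensor,
                              mask: torch.Tensor) -> torch.Tensor:
    """Summarize per-frame saliency with respect to the element (hypothetical).

    saliency: (T, H, W) saliency maps of the rendered view, one per frame.
    mask:     (T, H, W) binary screen-space mask of the animated element.
    Returns:  (T, 3) features: mean saliency inside the element, mean
              saliency overall, and the frame-to-frame change inside it.
    """
    area = mask.sum(dim=(1, 2)).clamp(min=1.0)
    inside = (saliency * mask).sum(dim=(1, 2)) / area
    overall = saliency.mean(dim=(1, 2))
    change = torch.diff(inside, prepend=inside[:1])  # temporal change signal
    return torch.stack([inside, overall, change], dim=1)


class NoticeabilityLSTM(nn.Module):
    """LSTM classifier: probability that the element change is noticed."""

    def __init__(self, feature_dim: int = 3, hidden_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, T, feature_dim), one feature vector per frame
        out, _ = self.lstm(features)
        return torch.sigmoid(self.head(out[:, -1, :]))  # P(noticed) per sequence


# Usage with dummy data standing in for rendered frames and a saliency model.
T, H, W = 90, 64, 64
sal = torch.rand(T, H, W)
msk = torch.zeros(T, H, W)
msk[:, 20:30, 20:30] = 1.0  # element occupies a fixed screen region here
feats = element_saliency_features(sal, msk).unsqueeze(0)  # (1, T, 3)
model = NoticeabilityLSTM()
print(model(feats))  # e.g. tensor([[0.51]]) before training
```

An LSTM is a natural fit here because noticeability depends on how the saliency distribution evolves over time as the element animates, not on any single frame in isolation.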

Authors
Zhipeng Li
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yi Fei Cheng
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yukang Yan
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
David Lindlbauer
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3613904.3642399

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Understanding Immersive Experiences

311
3 presentations
2024-05-14 01:00:00 – 2024-05-14 02:20:00