Virtual reality has been used effectively to elicit emotions, yet most research focuses on the intensity of affective responses rather than on how interaction influences those experiences. To address this gap, we advance a validated VR emotion-elicitation dataset through two key extensions. First, we add a new high-arousal, high-valence scene and validate its effectiveness in a within-subject study (N=24). Second, we incorporate interactive elements into each scene, creating both interactive and non-interactive versions to examine the impact of interaction on emotional responses. We evaluate interaction through a multimodal approach combining subjective ratings and physiological signals to capture both conscious and unconscious affective responses. Our evaluation study (N=84) shows that interaction not only amplifies emotions but also modulates them in context, supporting coping in negative scenes and enhancing enjoyment in positive scenes. These findings highlight the potential of scene-tailored interaction for applications where regulating emotions is as important as eliciting them.
Surgical emergencies often trigger acute cognitive overload in novice physicians, impairing their decision-making under pressure. Although Virtual Reality–based Stress Inoculation Training (VR-SIT) shows promise, current systems fall short in delivering real-time, effective support during moments of peak stress. To bridge this gap, we first conducted a formative study (N=12) to uncover the core needs of novice physicians for immediate assistance under acute stress and identified three key intervention strategies: self-regulation aids, procedure guidance, and emotional/sensory support. Building on these insights, we designed and implemented a novel VR-SIT system that incorporates a just-in-time adaptive intervention framework, dynamically tailoring support to learners' cognitive and emotional states. We then validated these strategies in a user study (N=26). Our findings provide empirical evidence and design implications for next-generation VR medical training systems, supporting physicians in sustaining cognitive clarity and accurate decision-making in critical situations.
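The abstract does not specify the triggering logic of the just-in-time adaptive intervention framework; as a minimal sketch of how such a rule might be expressed, the signal names, thresholds, and state-to-strategy mapping below are illustrative assumptions, not the paper's actual logic:

```python
# Hypothetical sketch of a JITAI-style trigger for a VR-SIT system.
# Signal names, thresholds, and the state-to-strategy mapping are
# illustrative assumptions, not the paper's actual logic.
def select_intervention(stress_level: float, error_rate: float) -> str:
    HIGH_STRESS = 0.7  # assumed normalized physiological-stress threshold
    HIGH_ERROR = 0.3   # assumed procedural-error threshold

    if stress_level > HIGH_STRESS and error_rate > HIGH_ERROR:
        return "procedure_guidance"         # offload procedure recall under overload
    if stress_level > HIGH_STRESS:
        return "self_regulation_aid"        # e.g., a paced-breathing cue
    if error_rate > HIGH_ERROR:
        return "emotional_sensory_support"  # e.g., calming sensory adjustments
    return "no_intervention"
```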
Physics governs everyday interaction, yet in Virtual Reality (VR) the fidelity of such interactions can diverge from reality. We investigate how the Physical Fidelity (virtual object behavior) and Action Fidelity (virtual hand behavior) of physics-driven interaction shape user experience. In a within-subject study (N=34), participants performed gamified tasks under three conditions: No-Physics (lower Physical and Action Fidelity), Object-Physics (higher Physical, lower Action Fidelity), and Full-Physics (higher Physical and Action Fidelity). Results show that higher Physical Fidelity reduces task efficiency and increases overall workload, with the No-Physics condition outperforming the others on these metrics. When higher Physical Fidelity is combined with higher Action Fidelity, the Full-Physics condition enhances body ownership and interaction quality, although efficiency degrades further in some cases. The hybrid Object-Physics condition consistently ranks lowest across all qualitative measures. Interpreting these results through the Interaction Fidelity Model, we offer design implications for VR applications.
Perspective-taking supports empathy, bias reduction, and social cognition, and virtual reality (VR) promises to facilitate it by immersing users into simulated perspectives.
Yet how VR qualities such as spatial presence and agency influence, and potentially enhance, perspective-taking remains unclear.
We conducted two controlled experiments (N=23 and N=25) using a VR paradigm that manipulated spatial presence (2D vs. 3D) and avatar agency (synchronous vs. delayed motion, and with vs. without a task). We measured cognitive perspective-taking with an anchoring task and used self-avatar overlap as an indicator of affective perspective-taking.
Results show that increased spatial presence and agency both significantly enhanced affective perspective-taking.
However, neither significantly affected cognitive perspective-taking. These findings suggest that while VR qualities can enhance how close a person feels to an avatar, they may not strongly affect cognitive perspective-taking. By studying the influence of two VR qualities, we offer guidance on building more effective perspective-taking experiences in immersive environments.
Playtesting is widely used in the game industry to identify design flaws and evaluate player experience, yet little research explores how to effectively visualize and analyze playtesting data. This challenge is particularly pronounced in motion-based VR games, which involve physical movements and interactions tracked through multimodal inputs, resulting in complex multidimensional data. To better understand the challenges designers face, we conducted a formative study with 30 practitioners in the VR domain to characterize playtesting workflows and associated tasks.
Based on these findings, we present HieraVisVR, a hierarchical visual analytics framework that incorporates body-motion-related data to help designers identify player behaviors and critical game moments, thereby simplifying their workflow. We demonstrate the applicability of HieraVisVR in three different applications and evaluate our system with playtesting experts through an analysis of motion-based game data. The study results suggest that our system enhances playtesters' understanding of the gameplay and improves their data analysis workflow, enabling them to analyze the playtest results of VR games in a top-down manner.
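HieraVisVR's data model is not described in the abstract; as a minimal sketch under assumed names and fields, one plausible hierarchy for multimodal playtest data (session, game moment, motion frame) might look like this:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative sketch only: HieraVisVR's actual data model is not given in
# the abstract. The three-level hierarchy and all field names are assumptions.
@dataclass
class MotionFrame:
    timestamp: float
    head_pos: Tuple[float, float, float]   # headset position (x, y, z)
    hand_pos: Tuple[float, float, float]   # controller position (x, y, z)

@dataclass
class GameMoment:
    label: str                              # e.g., "boss_fight" (hypothetical tag)
    frames: List[MotionFrame] = field(default_factory=list)

    def mean_hand_height(self) -> float:
        """Aggregate one body-motion feature over this moment."""
        return sum(f.hand_pos[1] for f in self.frames) / max(len(self.frames), 1)

@dataclass
class PlaySession:
    player_id: str
    moments: List[GameMoment] = field(default_factory=list)
```

A top-down analysis would then drill from session-level summaries into per-moment aggregates such as mean_hand_height and, finally, into the raw frames.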
How do we evaluate experiences in immersive environments? Despite decades of research in immersive technologies such as virtual reality, the field remains fragmented. Studies rely on overlapping constructs and heterogeneous instruments, with little agreement on what counts as immersive experience. To better understand this landscape, we conducted a bottom-up scoping review of 375 papers published in ACM CHI, UIST, VRST, SUI, IEEE VR, ISMAR, and TVCG. Our analysis reveals that evaluation practices are often domain- and purpose-specific, shaped more by local choices than by shared standards. Yet this diversity also points to new directions. Instead of multiplying instruments, researchers would benefit from integrating and refining them into smarter measures. Rather than focusing only on system outputs, evaluations must center the user’s lived experience. Computational modeling offers opportunities to bridge signals across methods, but lasting progress requires open and sustainable evaluation practices that support comparability and reuse. Ultimately, our contribution is to map current practices and outline a forward-looking agenda for immersive experience research.
Question-asking is one of the key indicators of cognitive engagement. However, our understanding of how the distinct psychological affordances of presentation media shape learners' spoken inquiries with embodied Intelligent Virtual Agents (IVAs) remains limited. To examine this process systematically, we propose a 5W1H-based framework for analyzing learner questions.
Using this framework, we conducted a user study comparing an Augmented Reality-based IVA (AR-IVA) deployed in the physical environment with a screen-based IVA (Video-IVA) during cardiopulmonary resuscitation (CPR) instruction. Results showed that the AR-IVA elicited higher spatial and social presence and promoted more frequent and longer questions focused on clarification and understanding. In contrast, the Video-IVA encouraged questions about procedural refinement. Presence acted as a selective filter rather than a universal mediator, shaping the timing and topics of questions. These effects were significantly moderated by learners’ motivational and strategic orientations toward learning. Based on these findings, we propose design implications for IVA-supported learning systems.
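To make the 5W1H idea concrete, a minimal sketch of a keyword-based question classifier follows; the cue patterns and fallback category are assumptions, since the paper's actual coding scheme is not given in the abstract:

```python
import re

# Hypothetical keyword cues for a 5W1H-style question classifier.
# Order matters: "how long" is checked under "when" before "how" fires.
WH_CUES = {
    "who":   [r"\bwho\b", r"\bwhom\b"],
    "what":  [r"\bwhat\b", r"\bwhich\b"],
    "when":  [r"\bwhen\b", r"\bhow long\b"],
    "where": [r"\bwhere\b"],
    "why":   [r"\bwhy\b"],
    "how":   [r"\bhow\b"],
}

def classify_question(question: str) -> str:
    """Return the first 5W1H category whose cue matches, else 'other'."""
    q = question.lower()
    for category, patterns in WH_CUES.items():
        if any(re.search(p, q) for p in patterns):
            return category
    return "other"

# Example learner question during CPR instruction:
print(classify_question("Where exactly should I place my hands?"))  # -> "where"
```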