Effortless manipulation both informs and relies on preshaping: the subconscious posing of the hand before grasping. Virtual environments and the design of interaction techniques alter interaction requirements such as contact and reach, forcing behavioural adaptation. We present a comparative study investigating preshaping behaviour across direct versus indirect (gaze-assisted) and bare-hand versus controller techniques in a docking task. Results reveal that response patterns scale with anticipated task difficulty, and that direct techniques elicit effective posing of the hand. Indirect techniques shortcut hand transport and, in turn, lack the sensory feedback needed to guide planning, inducing efficient but attenuated responses that necessitate compensatory manipulation and clutching. Notably, controllers that afford in-hand rotation allow users to extend their range of motion. These findings can inform interaction design to better afford preshaping and optimise 3D manipulation tasks.
Gaze-based selection in XR requires visual confirmation due to eye-tracking limitations and target ambiguity in 3D contexts. Current designs for wide-FOV displays use world-locked, central overlays, which are ill-suited to always-on AR glasses. This paper introduces PeriphAR (/peh-ree-faar/), a visualization technique that leverages peripheral vision for feedback during gaze-based selection on a monocular AR display. In a first user study, we isolated text, color, and shape properties of target objects to compare peripheral selection cues. Peripheral vision was more sensitive to color than to shape, but this sensitivity declined rapidly at lower contrast. To preserve preattentive processing of color, we developed two strategies for enhancing color in users' peripheral vision. In a second user study, the strategy that maximized the target's contrast against the neighboring object with the most similar color was subjectively preferred. As a proof of concept, we implemented PeriphAR in an end-to-end system to test performance with real-world object detection.
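A minimal sketch of one way to read the preferred contrast-maximization strategy: find the neighboring object whose color is most similar to the target's, then push the target's cue color away from that neighbor in a perceptual color space. The Lab representation, function names, and the `push` magnitude below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not PeriphAR's actual code): enhance a target's
# peripheral color cue against its most similarly colored neighbor.
import math

def lab_distance(a, b):
    """Euclidean distance in CIELAB, a common perceptual color metric."""
    return math.dist(a, b)

def enhance_peripheral_cue(target_lab, neighbor_labs, push=25.0):
    # 1. The most similarly colored neighbor dominates peripheral confusability.
    nearest = min(neighbor_labs, key=lambda n: lab_distance(target_lab, n))
    # 2. Shift the target color directly away from that neighbor in Lab space.
    d = lab_distance(target_lab, nearest)
    if d == 0:
        # Fallback when colors coincide: just brighten (L channel, capped at 100).
        return (min(target_lab[0] + push, 100.0),) + target_lab[1:]
    direction = [(t - n) / d for t, n in zip(target_lab, nearest)]
    return tuple(t + push * u for t, u in zip(target_lab, direction))
```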
We present Roomify, a spatially-grounded transformation system that generates themed virtual environments anchored to users' physical rooms while maintaining spatial structure and functional semantics. Current VR approaches face a fundamental trade-off: full immersion sacrifices spatial awareness, while passthrough solutions break presence. Roomify addresses this through spatially-grounded transformation: treating physical spaces as "spatial containers" that preserve key functional and geometric properties of furniture while enabling radical stylistic changes. Our pipeline combines in-situ 3D scene understanding, AI-driven spatial reasoning, and style-aware generation to create personalized virtual environments grounded in physical reality. We introduce a cross-reality authoring tool enabling fine-grained user control through MR editing and VR preview workflows. Two user studies validate our approach: one with 18 VR users demonstrates a 63% improvement in presence over passthrough and 26% over fully virtual baselines while maintaining spatial awareness; another with 8 design professionals confirms the system's creative expressiveness (scene quality: 5.95/7; creativity support: 6.08/7) and professional workflow value across diverse environments.
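The "spatial container" notion can be pictured as a small data structure that pins an object's geometry and function while leaving its style free to vary. This is a sketch under our own assumptions (field names and the restyle API are invented for illustration); the paper's pipeline is a full generative system, not this stub.

```python
# Hypothetical sketch of a "spatial container": geometry and semantics
# are preserved across theme changes; only the appearance is rewritten.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SpatialContainer:
    # Preserved: where the object is and what it is for.
    position: tuple          # world-space center (x, y, z)
    bounds: tuple            # axis-aligned extents (w, h, d)
    function: str            # e.g. "seating", "work-surface", "storage"
    # Mutable: how the object looks in the generated theme.
    style: str

def restyle(container: SpatialContainer, theme_style: str) -> SpatialContainer:
    """Swap appearance while geometry and semantics stay anchored."""
    return replace(container, style=theme_style)

desk = SpatialContainer((1.2, 0.0, 0.4), (1.4, 0.75, 0.7), "work-surface", "plain-desk")
themed = restyle(desk, "wizard-alchemy-table")  # same bounds, same function
```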
While Cave Automatic Virtual Environment (CAVE) systems have long enabled room-scale virtual reality and various kinds of interactivity, their content has largely remained predetermined. We present Storycaster, a generative AI CAVE system that transforms physical rooms into responsive storytelling environments. Unlike headset-based VR, Storycaster preserves spatial awareness, using live camera feeds to augment the walls with cylindrical projections so that users can create worlds that blend with their physical surroundings. Additionally, our system enables object-level editing, where physical items in the room can be transformed into their virtual counterparts in a story. A narrator agent guides participants, enabling them to co-create stories that evolve in response to voice commands, with each scene enhanced by generated ambient audio, dialogue, and imagery. Participants in our study (n=13) found the system highly immersive and engaging, identifying the narrator and audio as the most impactful elements, while also highlighting latency and image resolution as areas for improvement.
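As a rough illustration of the cylindrical projection step, the sketch below maps a point on a room wall to a coordinate in a cylindrical panorama. The room-centered frame and the panorama layout are our assumptions; the abstract does not specify Storycaster's actual warp.

```python
# Illustrative sketch: map a wall point to cylindrical panorama coordinates.
import math

def wall_point_to_panorama(x, z, y, wall_height, pano_w, pano_h):
    """(x, z) is the horizontal position of a wall pixel relative to the
    room center; y is its height. Returns (u, v) panorama pixel coords."""
    theta = math.atan2(z, x)                      # azimuth around the room, [-pi, pi]
    u = (theta + math.pi) / (2 * math.pi) * pano_w
    v = (1.0 - y / wall_height) * pano_h          # top of wall -> top of panorama
    return u, v
```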
Indoor carbon dioxide (CO\textsubscript{2}) can rapidly accumulate into invisible pollution \textit{hotspots}, posing significant health risks due to its odorless and colorless nature. Despite growing interest in wearable and stationary sensors for pollutant detection, effectively visualizing CO\textsubscript{2} levels and engaging individuals remains an ongoing challenge. In this paper, we develop a portable wrist-sized pollution sensor that detects CO\textsubscript{2} in real time at any indoor location and reveals \textit{CO\textsubscript{2} bubbles} by highlighting sudden spikes. To promote better ventilation habits and user awareness, we also develop a smartphone-based augmented reality (AR) game in which users locate and disperse these high-CO\textsubscript{2} zones. A user study with $35$ participants demonstrated increased engagement and a heightened understanding of CO\textsubscript{2}’s health impacts. Usability evaluations of our system yielded a median score of $1.88$, indicating strong practicality.
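One plausible reading of the spike-highlighting step is a rolling-baseline detector, sketched below; the window size and rise threshold are illustrative values, not the paper's.

```python
# Hypothetical sketch of flagging "CO2 bubbles": compare each reading
# to a rolling baseline and report sudden rises above it.
from collections import deque

def detect_spikes(readings_ppm, window=30, rise_ppm=200):
    """Yield (index, ppm) for readings that jump above the recent baseline.
    window and rise_ppm are assumed parameters, not the paper's values."""
    recent = deque(maxlen=window)
    for i, ppm in enumerate(readings_ppm):
        if len(recent) == recent.maxlen:
            baseline = sum(recent) / len(recent)
            if ppm - baseline > rise_ppm:
                yield i, ppm
        recent.append(ppm)
```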
Young adults often take breaks from screen-intensive work by consuming digital content on mobile phones, which undermines rest through visual fatigue and inactivity. We introduce a design framework that embeds light break activities into media content on AR smart glasses, balancing engagement and recovery through three strategies: (1) seamlessly guiding users by embedding activity cues aligned with media elements; (2) transitioning to audio-centric formats to reduce visual load while sustaining immersion; and (3) structuring sessions with "rise–peak–closure" pacing for smooth transitions. In a within-subjects study (N=16) comparing passive viewing, reminder-based breaks, and non-narrative activities, InteractiveBreak, instantiated from our framework, seamlessly guided activities, sustained engagement, and enhanced break quality. These findings demonstrate wearable AR's potential to support restorative relaxation by transforming breaks into engaging, meaningful experiences.
Personal computers and handheld devices provide keyboard shortcuts and swipe gestures that let users efficiently switch between applications, whereas today's virtual reality (VR) systems do not. In this work, we present an exploratory study of user interface aspects that support efficient switching between worlds in VR. We created eight interfaces that afford previewing and selecting from the available virtual worlds, including methods using portals and worlds-in-miniature (WiMs). To evaluate these methods, we conducted a controlled within-subjects empirical experiment (N=22) in which participants frequently transitioned between six different environments to complete an object collection task. Our quantitative and qualitative results show that WiMs supported rapid acquisition of high-level spatial information while searching and were deemed most efficient by participants, whereas portals provided fast pre-orientation. Finally, we present insights into the applicability, usability, and effectiveness of the VR world switching methods we explored, and provide recommendations for their application and for future context- and world-switching techniques and interfaces.