Managing multiple activities in virtual reality (VR) is often hindered by fragmented workflows and disruptive application switching. We present JourneyVR, a metaphorical interaction model designed for spatial and sequential workflows that reframes tasks as continuous journeys rather than isolated sessions. Users construct a Journey Map as a layout of islands (tasks) and bridges (transitions), which expands into an immersive world where activities unfold as a coherent, embodied narrative. Through a formative study, a controlled comparison, and an expert evaluation, JourneyVR was shown to enhance experiential continuity, intention to use, and overall satisfaction compared to using a conventional app launcher. Participants highlighted how the metaphor fosters motivation and achievement, while we identify boundaries regarding task type, scalability, and flexibility. Our findings demonstrate that framing sequential activities as navigable journeys can transform fragmented tasks into meaningful narratives, offering concrete guidelines for sustained engagement and more flexible workflows in immersive environments.
Mixed Reality (MR) interfaces increasingly rely on gaze for interaction, yet distinguishing visual attention from intentional action remains difficult, leading to the Midas Touch problem. Existing solutions require explicit confirmations, while brain–computer interfaces may provide an implicit marker of intention using Stimulus-Preceding Negativity (SPN). We investigated how Intention (Select vs. Observe) and Feedback (With vs. Without) modulate SPN during gaze-based MR interactions. We acquired EEG and eye-tracking data from 28 participants during realistic selection tasks. SPN was robustly elicited and sensitive to both factors: observation without feedback produced the strongest amplitudes, while intention to select and expectation of feedback reduced activity, suggesting SPN reflects anticipatory uncertainty rather than motor preparation. Complementary decoding with deep learning models achieved reliable person-dependent classification of user intention, with accuracies ranging from 75% to 97% across participants. These findings identify SPN as an implicit marker for building intention-aware MR interfaces that mitigate the Midas Touch problem.
This paper investigates associations, explicit representations of relations between multiple views in Mixed Reality (MR). While research on 2D desktop environments offers extensive recommendations for communicating relations between multiple views, MR environments lack such systematic guidance, necessitating adapted solutions that consider their spatial affordances. To address this gap, we systematically explored association techniques in existing research. Building on established 2D multi-view literature and refining insights from prior design principles, we developed a codebook to describe view relations and their representations. Applying it to a corpus of 44 immersive multi-view approaches, we identified recurring design strategies and synthesized them into a design space of visual association techniques adapted for immersive contexts. Using a lightweight prototyping framework, we validated the utility of the design space through three envisioning scenarios, demonstrating how associations can support exploration, coordination, and sensemaking in MR applications. Our results inform the design of MR multi-view environments.
Current Extended Reality (XR) devices are increasingly being used as productivity tools. Compared to conventional setups, they allow for more flexibility in dynamic work locations beyond the desktop while providing a large virtual workspace. Recent research has explored how users organize digital documents and how virtual interfaces could be adapted to different locations and scenarios. However, there has been limited research on how location changes affect productivity tasks in XR environments and how users manually adapt virtual content layouts after such task interruptions. To address this, we conducted an exploratory user study (N=17) in which participants worked on a document-centered organization and planning task while changing locations every five minutes. We examined how these spatial transitions interfered with the task and identified layout strategies and patterns. From our observations and participant responses, we derived a set of design guidelines to inform the development of future XR knowledge work systems in mobile contexts.
Habitual static stretching is recommended in many ergonomics guidelines, but maintaining this habit remains challenging for workers. We address two factors behind this difficulty: most workers lack an understanding of the appropriate mechanisms of stretching, and stretching sessions require interrupting ongoing tasks. We propose Screen-Directed Stretching, a novel technique that supports neck stretching in extended reality (XR) workspaces without interrupting work. Our technique temporarily repositions the virtual screen that the user is focusing on, guiding them to rotate their head in yaw, pitch, and roll directions and to maintain a posture for a specific duration, thus facilitating muscle extension. Through two preliminary user studies, we developed a prototype that seamlessly switches the screen's coordinate system between world-bound and body-bound frames of reference, balancing practical workability and good stretching guidance. A user study (N=16) demonstrates that our technique effectively induces stretching movements while minimizing loss of task efficiency.
While visual augmentation dominates the augmented reality landscape, devices like Meta Ray-Ban audio smart glasses signal growing industry movement toward audio augmented reality (AAR). Hearing is a primary channel for sensing context, anticipating change, and navigating social space, yet AAR’s everyday potential remains underexplored. We address this gap through a collaborative autoethnography (N=5; the authors) and an online survey (N=74). We identify ten roles for AAR, grouped into three categories: task- and utility-oriented, emotional and social, and perceptual collaborator. These roles are further layered with a rhythmic and embodied collaborator framing that maps them onto the micro-, meso-, and macro-rhythms of everyday life. Our analysis surfaces nuanced tensions, such as blocking distractions without erasing social presence, highlighting the need for context-aware design. This paper contributes a foundational and forward-looking framework for AAR in everyday life, providing design groundwork for systems attuned to daily routines, sensory engagement, and social expectations.
While augmented reality (AR) research has demonstrated the benefits of embedded visualizations for gross motor training, their applicability to facial exercises remains under-explored. Providing effective real-time feedback for facial muscle training presents unique design challenges, given the complexity of facial musculature. We developed three AR feedback approaches varying in spatial relationship to the user: situated (screen-fixed), proxy-embedded (on a mannequin), and fully embedded (overlaid on the user's face). In a within-subjects study (N=24), we measured exercise accuracy, cognitive load, and user preference during facial training tasks. The embedded feedback reduced cognitive load and received higher preference ratings, while the situated feedback enabled more precise corrections and higher accuracy. Qualitative analysis revealed a key design tension: embedded feedback improved the experience but created self-consciousness and interpretive difficulty. We distill these insights into design considerations addressing the trade-offs for facial training systems, with implications for rehabilitation, performance training, and motor skill acquisition.