Large language models are increasingly used in training scenarios to power embodied virtual instructors in Mixed Reality (MR). As these models grow more sophisticated, a key question emerges: does designing an agent to resemble the instructee improve outcomes? We present a user study with four guided assembly conditions: a non-matched instructor based on a real-life instructor's attributes, a personality-matched instructor, a gender- and voice-matched instructor, and a fully matched instructor reflecting the user's Big Five personality, cloned voice, and gender. Participants completed an ordered assembly task and reported on instructional quality. Results show that fully matched instructors were overwhelmingly preferred and significantly enhanced social presence and user experience. However, these subjective benefits did not translate into faster task completion, revealing a trade-off between engagement and efficiency. These findings offer critical guidance for designing future embodied virtual instructors and highlight the nuanced role of personalization in human–AI interaction.
Virtual Reality (VR) depends on haptic feedback to create immersive experiences. Traditional passive proxies align physical props with their virtual counterparts but remain limited in scalability and expressiveness, or require bulky actuators to support reconfiguration. We introduce User-reconfigured Haptics, an approach that leverages implicit user actions to reconfigure haptic interfaces, extending the gamut of VR haptic experiences. Modular 3D-printed cells are assembled into dynamic interfaces that express diverse haptic properties such as softness and weight. Because physical reconfigurations are masked by visual (re)mapping, user actions change haptic properties without being noticed, resulting in user-driven, dynamic haptic experiences. User studies show that our design provides distinguishable haptic experiences and is perceived as realistic and enjoyable in a VR task. We further showcase four applications: a fishing rod that changes weight and flexibility, a dynamic desktop of pressable buttons, a glove with adjustable squeezing, and a crossbow with variable pulling resistance.
Hybrid meetings are the new reality, yet they lack the richness of face-to-face interaction. In shared spaces, virtual or physical, interaction relies on more than words: proximity, non-verbal cues, and subtle movements all shape communication. Proximity captures how close we stand, which way we face, and how we move around others. This paper investigates how proxemics in dyad and triad conversations translate across physical and virtual contexts. We conducted a study with 24 participants in four groups, completing social tasks under four conditions: face-to-face, co-located XR, remote XR, and hybrid XR. Our instrumentation of physical and virtual environments enables direct comparison. The work contributes a rich open dataset of 2.3 million rows across 32 columns, supporting comparative and replicable analysis. This is the first study to compare proxemics across face-to-face, co-located XR, remote XR, and hybrid XR, offering a foundation for understanding how social space translates across contexts.
When given the opportunity, people tend to defer work interruptions until they reach a coarse breakpoint. Coarse breakpoints are frequently associated with less effort when resuming the task. We investigated how supporting task resumption with augmented reality (AR) cues affects this behavior. In a mixed factorial experiment, 50 participants performed a physical sorting task that included deferrable interruptions with varying distances to a coarse breakpoint, either with or without an AR cue indicating the next correct step after the interruption. Participants with the AR cue accepted interruptions at fine breakpoints more frequently than those without a cue, except when the coarse breakpoint was one step away, and reported less stress. Our findings indicate that AR cues attenuate but do not eliminate the need for specific task resumption strategies, such as reaching a coarse breakpoint, and reduce stress. Considering AR cues for task resumption may be particularly beneficial for time-critical interruptions and fast-paced work environments.
As LLM-based Conversational Avatars increasingly act as collaborators in hybrid indoor navigation, understanding how their personality traits influence human-avatar proxemic behavior is becoming crucial. Prior work has largely examined personality effects in static or one-sided interactions such as sitting, standing, or approaching. However, there is a gap in research on how avatar personality and motion-related factors (e.g., walking speed) shape proxemics when both the human and avatar are in motion.
To address this, we developed an AR indoor navigation system featuring a Conversational Virtual Avatar (CVA) with three distinct personalities: Dominant, Warm, and Conscientious. The CVA guides users to destinations within the environment. In a between-subjects study ($N$=27), we found statistically significant effects of avatar personality and walking speed on proxemic behavior.
Our work contributes to a broader understanding of how a CVA's personality and walking speed shape human-avatar proxemic behavior during navigation.
Embodying photorealistic personalized avatars that closely resemble their users has recently become technologically achievable in virtual reality (VR). While previous work highlighted several benefits of personalization, its impact on emotions, a central element of VR experiences, remains underexplored. Given their high production cost, assessing the added value of photorealistic personalized avatars is essential before their adoption as a design feature. To address this, we designed and validated (n=302) four virtual environments (VEs) that induce emotions in different quadrants of Russell's Circumplex Model. Using these VEs, we systematically investigated the impact of avatar type (no avatar, generic avatar, personalized avatar) on perceived emotions and physiological responses (n=51). Generic avatars enhanced valence, arousal, embodiment, presence, and selected physiological measures, while personalized avatars further amplified these effects and additionally increased heart rate variability, a marker of self-regulation. Our study demonstrates that photorealistic personalized avatars markedly enhance VR's capacity to foster emotional engagement.
Exoskeletons are increasingly deployed in real-world contexts, where communicating critical system states or unexpected events is important for effective interaction. Haptic feedback offers a direct communication channel, integrating naturally with the actuated body region. Yet, it remains unclear how well haptic feedback is perceived while the body is being actuated. In a controlled study (N=24) with a shoulder exoskeleton, we compare four common haptic notification channels (poking, proprioceptive, thermal, vibrotactile) under different levels of actuation. Results show that poking was detected fastest, while thermal and proprioceptive notifications were most accurate and noticeable. Actuation levels affected error rates and noticeability, but not response times. Participants reported that thermal notifications aligned best with the actuation levels, producing a distinct sensation that blended naturally with movement. In contrast, proprioceptive notifications conveyed the strongest sense of urgency. We discuss design implications for leveraging haptic notifications to support embodied communication with exoskeletons.