Mapping the Landscape of Affective Extended Reality: A Scoping Review of Biodata-Driven Systems for Understanding and Sharing Emotions
Description

This paper introduces the notion of affective extended reality (XR) to characterise XR systems that use biodata to enable understanding of emotions. The HCI literature contains many such systems, but they have not yet been mapped into a coherent whole. To address this, we conducted a scoping review of 82 papers that explore the nexus of biodata, emotions, and XR. We analyse the technologies used in these systems, the interaction techniques employed, and the methods used to evaluate their effectiveness. Through our analysis, we contribute a mapping of the current landscape of affective XR, revealing diversity in the goals for enabling emotion sharing. We demonstrate how HCI researchers have explored the design of the interaction flows in XR biofeedback systems, highlighting key design dimensions and challenges in understanding emotions. We discuss underused approaches for emotion sharing and highlight opportunities for future research on affective XR.

Prosocial AI Apologies on the Road: Emotional Compensation for Other Drivers' Misbehavior
Description

Aggressive driving often triggers anger and retaliatory behaviors, posing threats to traffic safety. This paper proposes an AI-driven apology mechanism based on an Augmented Reality Head-Up Display (AR-HUD), which delivers immediate apologies on behalf of offending drivers during traffic conflicts and repairs damaged social relations through prosocial lies. We conducted a 2 (scenario risk: high vs. low) × 5 (apology depth) mixed-design experiment (N = 40) to evaluate its effectiveness. Results show that AI apologies enhanced positive emotions and forgiveness intentions while reducing anger, with participants also perceiving psychological benefits. These effects were consistent across both high- and low-risk scenarios. Our findings offer a practical design pathway for human-AI emotional regulation in traffic contexts.

FeelWave: Enabling Emotion-Aware Voice Interaction through Noise-Robust mmWave Emotion Sensing
Description

Voice has been a primary interaction mode with LLM-powered assistants. Beyond semantics, voice carries emotional cues with potential to guide empathetic system responses. Yet, robust vocal emotion sensing in noise and its use in optimizing interactions remain underexplored. In response, we present FeelWave, which achieves empathetic voice interaction through noise-robust mmWave emotion sensing and structured LLM prompts. It extracts robust vocal information from mmWave signals, applies audio-to-mmWave transfer learning for efficient emotion recognition, and employs chain-of-thought-based query optimization to enable emotion-adaptive responses. Evaluations show that FeelWave achieves 92.3% emotion recognition accuracy and remains robust in noisy environments, yielding a 62.9 percentage-point gain over audio-based models. In voice interaction studies, 74.3% of users prefer FeelWave, reporting significantly higher satisfaction than a baseline without emotion sensing (4.37 vs. 3.22). A SUS score of 88.3 confirms FeelWave's high usability in real-world deployment. We hope this work will inspire more empathetic, user-centered AI-driven assistants.
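The abstract above describes chain-of-thought-based query optimization that turns a detected emotion into an emotion-adaptive LLM response. As a minimal illustrative sketch only (the template, emotion labels, and guidance strings below are assumptions, not the paper's actual implementation), such prompt construction might look like:

```python
# Hypothetical sketch of emotion-adaptive prompt construction in the spirit of
# FeelWave's chain-of-thought query optimization. Labels and wording are
# illustrative assumptions, not the system's real prompts.

EMOTION_GUIDANCE = {
    "angry": "Acknowledge the frustration first, then answer calmly.",
    "sad": "Use a gentle, supportive tone before giving the answer.",
    "happy": "Match the upbeat tone while staying concise.",
    "neutral": "Answer directly and concisely.",
}

def build_empathetic_prompt(user_query: str, emotion: str) -> str:
    """Wrap a voice query in a chain-of-thought prompt that asks the LLM to
    reason about the user's emotional state before composing a reply."""
    guidance = EMOTION_GUIDANCE.get(emotion, EMOTION_GUIDANCE["neutral"])
    return (
        f"The user sounds {emotion}. {guidance}\n"
        "Think step by step: (1) infer what the user needs emotionally, "
        "(2) infer what they need informationally, (3) compose one reply "
        "that addresses both.\n"
        f"User query: {user_query}"
    )

prompt = build_empathetic_prompt("Why is my flight delayed again?", "angry")
```

The resulting string would then be sent to the LLM in place of the raw query, so the assistant's reply accounts for both the semantic request and the sensed emotional state.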

BioHaptics: Emotion Modulation via Biosignal-Inspired Ultrasonic Mid-air Haptics in Video-Watching Experiences
Description

Ultrasonic mid-air haptics (UMH) offers a novel way to modulate affective responses through contactless, biosignal-feature-inspired haptics (i.e., BioHaptics). Yet, the effects of different BioHaptics on affective responses during video watching are unclear, limiting flexible emotion modulation and the broader use of UMH as a natural affective design channel for enhancing everyday media consumption. This paper explores how BioHaptics encoded from different biosignal features (e.g., heart rate (HR), heart rate variability (HRV), and respiration amplitude (RA)) impact emotions in video-watching contexts. In two experiments with 70 participants, we assessed affective responses while delivering three types of BioHaptics during video watching. Results showed that HR and RA haptics promoted RA and HRV, with potential increments in emotional pleasantness and regulation; HRV haptics lowered HR, leading to calmer responses. This research offers implications and recommendations for emotion modulation through UMH and advances the design of emotionally engaging multisensory feedback.

Rough Meanings: Cross-sensory correspondences linking surface textures with sound symbolism, colours, and emotions
Description

Surface textures play a critical role in shaping interaction with tangible and multisensory technologies, yet little is known about how their microstructural features influence cognitive and affective responses - factors central to interface design. We investigated this through cross-sensory correspondences of textures systematically varying in roundness and size. Thirty participants explored 3D-printed textures under visuo-tactile and tactile-only conditions, rating them on visuo-linguistic association, roughness, colour, and emotion. Rounded textures were often linked with Bouba and pleasantness, whereas pointed textures were associated with Kiki, higher arousal, and warmer colours. Visual access also influenced exploratory behaviour, reflected in applied normal force. These findings demonstrate how microstructural tactile cues shape cross-sensory and affective associations. We propose cross-sensory correspondences as a methodological framework for designing microstructural features of surface textures, which could open up new design opportunities for pseudo-haptic feedback, texture-rich tangible interfaces, and coherent multisensory experiences in VR/AR.

"Our Secret Language": Co-Creating and Ritualizing Affective Haptics in Long-Distance Relationships
Description

Long-distance relationships (LDRs) struggle to sustain intimacy without physical touch. Existing mediated social touch systems rely on designer-authored haptic patterns, which limit opportunities for personalization and shared meaning-making. We present Onni, a haptic interface that lets couples collaboratively define and experience a shared library of haptic interactions. In Study 1, we conducted co-creation workshops (n=20) to examine how couples negotiate and align meanings in haptic interactions. In Study 2, we deployed Onni in everyday routines (n=6) to explore how these interactions are adopted, adapted, and ritualized. Our findings illustrate that couples co-create and personalize haptic interactions through continuous exploration, negotiation, and situational adaptation. By integrating a dyadic co-design approach, an end-user authoring interface for a shared action–feedback haptic repertoire, and a longitudinal view of how meanings evolve in everyday LDR routines, this work advances understanding of haptic meaning-making as a collaboratively constructed and ritualized process. It offers concrete design implications for building personalized, evolving haptic systems that support intimacy in LDRs.
