Getting Emotional

Conference Name
CHI 2026
Mapping the Landscape of Affective Extended Reality: A Scoping Review of Biodata-Driven Systems for Understanding and Sharing Emotions
Abstract

This paper introduces the notion of affective extended reality (XR) to characterise XR systems that use biodata to enable understanding of emotions. The HCI literature contains many such systems, but they have not yet been mapped into a coherent whole. To address this, we conducted a scoping review of 82 papers that explore the nexus of biodata, emotions, and XR. We analyse the technologies used in these systems, the interaction techniques employed, and the methods used to evaluate their effectiveness. Through our analysis, we contribute a mapping of the current landscape of affective XR, revealing diversity in the goals for enabling emotion sharing. We demonstrate how HCI researchers have explored the design of the interaction flows in XR biofeedback systems, highlighting key design dimensions and challenges in understanding emotions. We discuss underused approaches for emotion sharing and highlight opportunities for future research on affective XR.

Award
Honorable Mention
Authors
Zhidian Lin
RMIT University, Melbourne, Victoria, Australia
Allison Jing
RMIT University, Melbourne, Victoria, Australia
Ziyuan Qu
RMIT University, Melbourne, Victoria, Australia
Fabio Zambetta
RMIT University, Melbourne, Victoria, Australia
Ryan M. Kelly
RMIT University, Melbourne, Victoria, Australia
Prosocial AI Apologies on the Road: Emotional Compensation for Other Drivers' Misbehavior
Abstract

Aggressive driving often triggers anger and retaliatory behaviors, posing threats to traffic safety. This paper proposes an AI-driven apology mechanism based on an Augmented Reality Head-Up Display (AR-HUD), which delivers immediate apologies on behalf of offending drivers during traffic conflicts and repairs damaged social relations through prosocial lies. We conducted a 2 (scenario risk: high vs. low) × 5 (apology depth) mixed-design experiment (N = 40) to evaluate its effectiveness. Results show that AI apologies enhanced positive emotions and forgiveness intentions while reducing anger, with participants also perceiving psychological benefits. These effects were consistent across both high- and low-risk scenarios. Our findings offer a practical design pathway for human-AI emotional regulation in traffic contexts.

Authors
Jun Zhang
Hubei Institute of Fine Arts, Wuhan, China
Weiqi Mei
Wuhan University of Technology, School of Art and Design, Wuhan, China
Yuchen Wang
School of Art & Design,Guangdong University of Technology}, Guangzhou, China
Chang Guo
College of Design and Innovation, Tongji University, Shanghai, China
Weibo Ling
Faculty of Applied Sciences, Macao SAR, China
Bo Liu
Shanghai Jiao Tong University, Shanghai, China
Qianwen Fu
Tongji University, Shanghai, China
Jie Zhang
Macao Polytechnic University, Macao, China
Fang You
Tongji University, Shanghai, China
Yan Luximon
The Hong Kong Polytechnic University, Kowloon, Hong Kong
FeelWave: Enabling Emotion-Aware Voice Interaction through Noise-Robust mmWave Emotion Sensing
Abstract

Voice has been a primary interaction mode with LLM-powered assistants. Beyond semantics, voice carries emotional cues with potential to guide empathetic system responses. Yet, robust vocal emotion sensing in noise and its use in optimizing interactions remain underexplored. In response, we present FeelWave, which achieves empathetic voice interaction through noise-robust mmWave emotion sensing and structured LLM prompts. It extracts robust vocal information from mmWave signals, applies audio-to-mmWave transfer learning for efficient emotion recognition, and employs chain-of-thought-based query optimization to enable emotion-adaptive responses. Evaluations show that FeelWave achieves 92.3% emotion recognition accuracy and remains robust in noisy environments, yielding a 62.9 percentage-point gain over audio-based models. In voice interaction studies, 74.3% of users prefer FeelWave, reporting significantly higher satisfaction than a baseline without emotion sensing (4.37 vs. 3.22). A SUS score of 88.3 confirms FeelWave's high usability in real-world deployment. We hope this work will inspire more empathetic, user-centered AI-driven assistants.

Authors
Lingyu Wang
University of Science and Technology of China, Hefei, Anhui, China
You Zuo
University of Science and Technology of China, Hefei, Anhui, China
Dequan Wang
University of Science and Technology of China, Hefei, Anhui, China
Chenming He
University of Science and Technology of China, Hefei, Anhui, China
Chengzhen Meng
University of Science and Technology of China, Hefei, Anhui, China
Xinran Zhang
University of Science and Technology of China, Hefei, Anhui, China
Xiaoran Fan
Independent Researcher, Sunnyvale, California, United States
Yanyong Zhang
University of Science and Technology of China, Hefei, Anhui, China
BioHaptics: Emotion Modulation via Biosignal-Inspired Ultrasonic Mid-air Haptics in Video-Watching Experiences
Abstract

Ultrasonic mid-air haptics (UMH) offers a novel way to modulate affective responses through contactless haptics inspired by biosignal features (i.e., BioHaptics). Yet the effects of different BioHaptics on emotions experienced while watching videos are unclear, limiting flexible emotion modulation and the broader use of UMH as a natural affective design channel for enhancing everyday media consumption. This paper explores how BioHaptics encoded from different biosignal features (e.g., heart rate (HR), heart rate variability (HRV), and respiration amplitude (RA)) affect emotions in video-watching contexts. In two experiments with 70 participants, we assessed affective responses while adding three types of BioHaptics during video watching. Results showed that HR and RA haptics promoted RA and HRV, with potential increments in emotional pleasantness and regulation; HRV haptics lowered HR, leading to calmer responses. This research offers implications and recommendations for emotion modulation through UMH and advances the design of emotionally engaging multisensory feedback.

Authors
Zhouyang Shen
University College London, London, United Kingdom
Madhan Kumar Vasudevan
University College London, London, United Kingdom
Jing Xue
University College London, London, United Kingdom
Marianna Obrist
University College London, London, United Kingdom
Diego Martinez Plasencia
University College London, London, United Kingdom
Rough Meanings: Cross-sensory correspondences linking surface textures with sound symbolism, colours, and emotions
Abstract

Surface textures play a critical role in shaping interaction with tangible and multisensory technologies, yet little is known about how their microstructural features influence cognitive and affective responses, factors central to interface design. We investigated this through cross-sensory correspondences of textures systematically varying in roundness and size. Thirty participants explored 3D-printed textures under visuo-tactile and tactile-only conditions, rating them on visuo-linguistic association, roughness, colour, and emotion. Rounded textures were often linked with Bouba and pleasantness, whereas pointed textures were associated with Kiki, higher arousal, and warmer colours. Visual access also influenced exploratory behaviour, reflected in applied normal force. These findings demonstrate how microstructural tactile cues shape cross-sensory and affective associations. We propose cross-sensory correspondences as a methodological framework for designing microstructural features of surface textures, which could open up new design opportunities for pseudo-haptic feedback, texture-rich tangible interfaces, and coherent multisensory experiences in VR/AR.

Authors
Min Susan Li
University of Bristol, Bristol, United Kingdom
Zhuzhi Fan
University of Bristol, Bristol, United Kingdom
Tegan Joy Roberts-Morgan
University of Bristol, Bristol, United Kingdom
Amy Ingold
University of Bristol, Bristol, United Kingdom
Oussama Metatla
University of Bristol, Bristol, United Kingdom
"Our Secret Language": Co-Creating and Ritualizing Affective Haptics in Long-Distance Relationships
Abstract

Long-distance relationships (LDRs) struggle to sustain intimacy without physical touch. Existing mediated social touch systems rely on designer-authored haptic patterns, which limit opportunities for personalization and shared meaning-making. We present Onni, a haptic interface that lets couples collaboratively define and experience a shared library of haptic interactions. In Study 1, we conducted co-creation workshops (n=20) to examine how couples negotiate and align meanings in haptic interactions. In Study 2, we deployed Onni in everyday routines (n=6) to explore how these interactions are adopted, adapted, and ritualized. Our findings illustrate that couples co-create and personalize haptic interactions through continuous exploration, negotiation, and situational adaptation. By integrating a dyadic co-design approach, an end-user authoring interface for a shared action–feedback haptic repertoire, and a longitudinal view of how meanings evolve in everyday LDR routines, this work advances understanding of haptic meaning-making as a collaboratively constructed and ritualized process. It offers concrete design implications for building personalized, evolving haptic systems that support intimacy in LDRs.

Authors
Mengshi Yang
Tongji University, Shanghai, China
Tim Moesgen
Aalto University, Espoo, Finland
Ruochen Hu
Tsinghua University, Beijing, China
Yen Hang Zhou
Tsinghua University, Beijing, China
Zhining Li
Tsinghua University, Beijing, China
Min Hua
Shanghai Jiao Tong University, Shanghai, China
Antti Salovaara
Aalto University, Espoo, Finland