Human-Robot Interaction & Embodied Sensing

Conference
CHI 2026
Don't Worry, Just Follow Me: Prototyping and In-the-Wild Evaluation of Smart Pole Interaction Unit with Mobility
Abstract

Pedestrian–automated vehicle (AV) encounters in shared spaces often involve hesitation and ambiguity. Vehicle-mounted external human–machine interfaces (eHMIs) can help, but obscured or poorly timed communications create significant challenges. To address this, we present a mobile smart pole interaction unit (SPIU) with integrated cameras and LED displays, designed as a pedestrian-side system to deliver explicit cues ("WALK," "STOP"). An in-the-wild evaluation of the SPIU (N=21) using a four-factor analysis (CarBehavior, Mobility, eHMI, SPIU) showed that the SPIU improved understandability, trust, and perceived safety, and reduced workload compared with the baseline, with the combination (eHMI+SPIU) yielding the strongest results. Beyond these quantitative benefits, participants appreciated the mobility of the SPIU for its "clear" and "easy to decide" mediation. This work contributes (1) a design and deployment framework for a mobile SPIU and (2) an in-the-wild evaluation protocol for pedestrian–AV interactions in nonsignalized spaces. Our work sparks discussion on real-world evaluations involving detailed vehicle kinematics and accessible multimodality (e.g., audio), focusing on the role of personal robots as user-side eHMIs.

Award
Honorable Mention
Authors
Vishal Chauhan
The University of Tokyo, Bunkyo, Tokyo, Japan
Anubhav Anubhav
The University of Tokyo, Tokyo, Japan
Mark Colley
UCL Interaction Centre, London, United Kingdom
Chia-Ming Chang
National Taiwan University of Arts, Taipei, Taiwan
Xinyue Gui
The University of Tokyo, Tokyo, Japan
Ding Xia
The University of Tokyo, Tokyo, Japan
Ehsan Javanmardi
The University of Tokyo, Tokyo, Japan
Takeo Igarashi
The University of Tokyo, Tokyo, Japan
Kantaro Fujiwara
The University of Tokyo, Tokyo, Japan
Manabu Tsukada
The University of Tokyo, Tokyo, Japan
Roaming with a Robot: Analyzing the Experiences and Understanding the Dimensions of Designing Human-Robot Walking Interactions
Abstract

Walking is an essential aspect of daily life, and walking with companions offers numerous benefits. Recently developed mobile robots, through their ability to navigate challenging terrains, open new possibilities for outdoor walking companionship. Yet, little is known about how such companions shape the human walking experience. In this study, nine participants walked outdoors with a robot and later reflected on their walking experience in semi-structured interviews. Thematic analysis showed that the robot influenced how participants related to it, how they managed proximity, and how their attention, control, and social presence were affected. Building on these insights, we identify five key dimensions of human–robot walking: attunement, awareness mediation, proxemics, social perception, and playful curiosity. These dimensions capture how walking with robots transforms this ordinary activity into a co-experienced practice and offer concrete implications for designing more meaningful, comfortable, and socially attuned human–robot walking interactions.

Authors
Eshtiak Ahmed
Tampere University, Tampere, Finland
Çağlar Genç
Tampere University, Tampere, Finland
Velvet Spors
University of Tartu, Tartu, Estonia
Juho Hamari
Tampere University, Tampere, Finland
Oğuz ‘Oz’ Buruk
Tampere University, Tampere, Finland
Peeking Ahead of the Field Study: Exploring VLM Personas as Support Tools for Embodied Studies in HCI
Abstract

Field studies are irreplaceable but costly, time-consuming, and error-prone, requiring careful preparation. Inspired by rapid prototyping in manufacturing, we propose a fast, low-cost evaluation method using Vision-Language Model (VLM) personas to simulate outcomes comparable to field results. While LLMs show human-like reasoning and language capabilities, autonomous vehicle (AV)–pedestrian interaction requires spatial awareness, emotional empathy, and behavioral generation. This raises our research question: To what extent can VLM personas mimic human responses in field studies? We conducted parallel studies: 1) a real-world study with 20 participants, and 2) a video study using 20 VLM personas, both on a street-crossing task. We compared their responses and interviewed five HCI researchers on potential applications. Results show that VLM personas mimic human response patterns (e.g., average crossing times of 5.25 s vs. 5.07 s) but lack behavioral variability and depth. They show promise for formative studies, field study preparation, and human data augmentation.

Award
Honorable Mention
Authors
Xinyue Gui
The University of Tokyo, Tokyo, Japan
Ding Xia
The University of Tokyo, Tokyo, Japan
Mark Colley
UCL Interaction Centre, London, United Kingdom
Yuan Li
Keio University, Fujisawa-shi, Japan
Vishal Chauhan
The University of Tokyo, Bunkyo, Tokyo, Japan
Anubhav Anubhav
The University of Tokyo, Tokyo, Japan
Zhongyi Zhou
Google, Tokyo, Japan
Ehsan Javanmardi
The University of Tokyo, Tokyo, Japan
Stela Hanbyeol Seo
Kyoto University, Kyoto, Kyoto, Japan
Chia-Ming Chang
National Taiwan University of Arts, Taipei, Taiwan
Manabu Tsukada
The University of Tokyo, Tokyo, Japan
Takeo Igarashi
The University of Tokyo, Tokyo, Japan
Video
“Too Crowded for a Robot?”: Modeling Human Acceptance Criteria for Elevator-Riding Robots
Abstract

Robots are increasingly expected to share elevators with people, yet little is known about the conditions shaping acceptance. We introduce the Robot Boarding Area (RBA)—a designated entry zone for robots—and examine how its availability and congestion affect user evaluations. In an online survey, acceptance sharply decreased once the RBA was occupied by any person or large object, even under moderate crowding. A VR experiment confirmed this pattern and further showed that participants preferred when robots refrained from boarding in crowded conditions compared to forcing entry. By formalizing the RBA as an acceptance criterion and demonstrating the value of adaptive skip strategies, this work identifies spatial availability and boarding behavior as central to socially acceptable robot deployment in elevators.

Authors
Seoktae Kim
NAVER LABS, Seongnam, Korea, Republic of
Sangyoung Cho
NAVER LABS, Seongnam, Korea, Republic of
Kahyeon Kim
NAVER LABS, Seongnam, Korea, Republic of
Sure Bak
NAVER LABS, Seongnam, Korea, Republic of
RoboHaptics: Designing Haptic Interactions for Lower Body with Quadruped Robot Dogs
Abstract

Extended Reality (XR) has advanced audiovisual immersion, yet haptic feedback, especially for the lower body, remains limited. We present RoboHaptics, the first system to explore lower-body haptics using commercial quadruped robots. RoboHaptics leverages the robots' freestanding and reconfigurable nature to deliver both active and passive tactile feedback without requiring worn devices. We contribute a design space for quadruped-mediated haptics, a software toolkit with a programmable library of tactile effects, and empirical evaluations showing that quadruped robots can deliver safe force feedback from 3–28 N (below nociceptive thresholds), reach lower-body locations with precision of 2.1–5.1 mm and accuracy of 3.7–17 mm, and support a wide range of tactile effects. A 12-participant study further revealed significant insights into how RoboHaptics can increase realism and immersion by providing lower-body haptic feedback. Together, our work establishes quadruped robots as a versatile platform for mobile, off-body haptics in XR.

Authors
Huanjun Zhao
University of Calgary, Calgary, Alberta, Canada
Matthew James Newton
University of Calgary, Calgary, Alberta, Canada
Sutirtha Roy
University of Calgary, Calgary, Alberta, Canada
Aditya Shekhar Nittala
University of Calgary, Calgary, Alberta, Canada
Every Move You Make: Visualizing Near-Future Motion Under Delay for Telerobotics
Abstract

Delays in direct teleoperation decouple operator input from robot feedback. We frame this not as a unitary problem but as three facets of operator uncertainty: (1) communication, when commands take effect, (2) trajectory, how inputs map to motion, and (3) environmental, how external factors alter outcomes. We externalized each facet through predictive visualizations: Network, Path, and Envelope. In a controlled study with 24 participants (novices in telerobotics) navigating a simulated robot under a fixed 2.56 s round-trip delay, we compared these visualizations against a delayed-video baseline. Path significantly shortened task time, lowered perceived cognitive load, and reduced reliance on reactive "move-and-wait" behavior. Envelope lowered cognitive load but did not significantly reduce reactive behavior or improve performance, while Network had no measurable effect. These results indicate that predictive support is effective only when trajectory uncertainty is externalized, enabling operators to move from reactive to more proactive control.

Authors
Dries Cardinaels
UHasselt - Flanders Make, Diepenbeek, Belgium
Raf Ramakers
UHasselt - Flanders Make, Diepenbeek, Belgium
Tom Veuskens
UHasselt - Flanders Make, Diepenbeek, Belgium
Thomas Pietrzak
Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France
Gustavo Alberto Rovelo Ruiz
UHasselt - Flanders Make, Diepenbeek, Belgium
Kris Luyten
UHasselt - Flanders Make, Diepenbeek, Belgium
Video
When a Robot Communicates Through Air: Contextual Interpretations of Wind and Olfactory Cues
Abstract

This study explores how nonverbal, sensory cues—olfactory and wind—can serve as subtle channels for behavioral guidance in mobile human-robot interaction. As multimodal interaction becomes increasingly integral to HRI, implicit communication remains underexplored, particularly through non-visual and non-auditory modalities. To address this gap, we conducted a Wizard-of-Oz study with 35 participants who experienced three types of stimuli—strong wind, weak wind, and olfactory cues—across six contextual scenarios. Our findings show that such sensory cues can induce affective interpretations ranging from support to surveillance, depending on the context. Olfactory cues generally evoked more positive impressions and a greater sense of care than wind, while wind cues were perceived as more directive and intrusive in comparison. These results suggest that scent and wind offer promising potential as ambient, affective, and non-intrusive notification channels for future human-robot interaction systems.

Authors
Chaeeun Noh
Chungnam National University, Daejeon, Korea, Republic of
Jaejeung Kim
Chungnam National University, Daejeon, Korea, Republic of