Physical Tasks & Robots

Conference Name
CHI 2026
Take the Dog to the Park: Quadruped Robot for Joint Attention Training with Autistic Children in Naturalistic Settings
Abstract

Robot-supported interventions for joint attention (JA) in autistic children have shown encouraging outcomes, yet most remain confined to stationary robots in indoor settings, limiting opportunities for skill generalization and broader developmental benefits. We introduce an intervention that employs a quadruped robot dog as a peer-like partner for JA training across both indoor and outdoor environments. In this intervention, the robot dog directs children's attention to distributed targets in the environment and initiates JA trials. A four-week pre-post exploratory study with six autistic children demonstrated improvements in JA performance and indications of transfer to daily social communication. Spontaneous behaviors such as motor imitation (crawling) and novel social interactions with the robot also emerged, suggesting potential for broader developmental gains. These findings provide initial evidence for the efficacy of mobile robot-supported JA interventions in naturalistic contexts and offer implications for future design.

Award
Honorable Mention
Authors
Yuyang Fang
Zhejiang University, Hangzhou, Zhejiang, China
Yi Hu
Zhejiang University, Hangzhou, Zhejiang, China
Jiayu Teng
Zhejiang University, Shanghai, China
Yu Cai
Zhejiang University, Hangzhou, China
Jiayang Liu
Zhejiang University, Hangzhou, Zhejiang, China
Feifan Xia
Imperial College London, London, United Kingdom
Yilin Tang
Zhejiang University, Hangzhou, Zhejiang, China
Liuqing Chen
Zhejiang University, Hangzhou, China
TactDeform: Finger Pad Deformation Inspired Spatial Tactile Feedback for Virtual Geometry Exploration
Abstract

Spatial tactile feedback can enhance the realism of geometry exploration in virtual reality applications. Current vibrotactile approaches often face challenges with the spatial and temporal resolution needed to render different 3D geometries. Inspired by the natural deformation of finger pads when exploring 3D objects and surfaces, we propose TactDeform, a parametric approach to render spatio-temporal tactile patterns using a finger-worn electro-tactile interface. The system dynamically renders electro-tactile patterns based on both interaction contexts (approaching, contact, and sliding) and geometric contexts (geometric features and textures), emulating deformations that occur during real-world touch exploration. Results from a user study (N=24) show that the proposed approach enabled high texture discrimination and geometric feature identification compared to a baseline. Informed by results from a free 3D-geometry exploration phase, we provide insights that can inform future tactile interface designs.

Authors
Yihao Dong
The University of Sydney, Camperdown, NSW, Australia
Praneeth Bimsara Perera
The University of Sydney, Sydney, NSW, Australia
Chin-Teng Lin
Australian AI Institute, School of Computer Science, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, Australia
Craig T. Jin
The University of Sydney, Sydney, NSW, Australia
Anusha Withana
The University of Sydney, Sydney, NSW, Australia
Beyond the Desk: Barriers and Future Opportunities for AI to Assist Scientists in Embodied Physical Tasks
Abstract

More scientists are now using AI, but prior studies have examined only how they use it 'at the desk' for computer-based work. However, given that scientific work often happens 'beyond the desk' at lab and field sites, we conducted the first study of how scientific practitioners use AI for embodied physical tasks. We interviewed 12 scientific practitioners doing hands-on lab and fieldwork in domains like nuclear fusion, primate cognition, and biochemistry, and found three barriers to AI adoption in these settings: 1) experimental setups are too high-stakes to risk AI errors, 2) constrained environments make it hard to use AI, and 3) AI cannot match the tacit knowledge of humans. Participants then developed speculative designs for future AI assistants to 1) monitor task status, 2) organize lab-wide knowledge, 3) monitor scientists' health, 4) do field scouting, and 5) do hands-on chores. Our findings point toward AI as background infrastructure to support physical work rather than replacing human expertise.

Authors
Irene Hou
University of California, San Diego, San Diego, California, United States
Alexander Qin
University of California, San Diego, San Diego, California, United States
Lauren Cheng
University of California, San Diego, San Diego, California, United States
Philip Guo
UC San Diego, La Jolla, California, United States
From Pets to Robots: MojiKit as a Data-Informed Toolkit for Affective HRI Design
Abstract

Designing affective behaviors for animal-inspired social robots often relies on intuition and personal experience, leading to fragmented outcomes. To provide more systematic guidance, we first coded and analyzed human–pet interaction videos, validated insights through literature and interviews, and created structured reference cards that map the design space of pet-inspired affective interactions. Building on this, we developed MojiKit, a toolkit combining reference cards, a zoomorphic robot prototype (MomoBot), and a behavior control studio. We evaluated MojiKit in co-creation workshops with 18 participants, finding that MojiKit helped them design 35 affective interaction patterns beyond their own pet experiences, while the code-free studio lowered the technical barrier and enhanced creative agency. Our contributions include the data-informed structured resource for pet-inspired affective HRI design, an integrated toolkit that bridges reference materials with hands-on prototyping, and empirical evidence showing how MojiKit empowers users to systematically create richer, more diverse affective robot behaviors.

Authors
Liwen He
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Pingting Chen
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Ziheng Tang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Yixiao Liu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Jihong Jeung
The Future Laboratory, Haidian, Beijing, China
Teng Han
Institute of Software, Chinese Academy of Sciences, Beijing, China
Xin Tong
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
How Do We Research Human-Robot Interaction in the Age of Large Language Models? A Systematic Review
Abstract

Advances in large language models (LLMs) are profoundly reshaping the field of human–robot interaction (HRI). While prior work has highlighted the technical potential of LLMs, few studies have systematically examined their human-centered impact (e.g., human-oriented understanding, user modeling, and levels of autonomy), making it difficult to consolidate emerging challenges in LLM-driven HRI systems. Therefore, we conducted a systematic literature search following the PRISMA guideline, identifying 86 articles that met our inclusion criteria. Our findings reveal that: (1) LLMs are transforming the fundamentals of HRI by reshaping how robots sense context, generate socially grounded interactions, and maintain continuous alignment with human needs in embodied settings; and (2) current research is largely exploratory, with different studies focusing on different facets of LLM-driven HRI, resulting in wide-ranging choices of experimental setups, study methods, and evaluation metrics. Finally, we identify key design considerations and challenges, offering a coherent overview and guidelines for future research at the intersection of LLMs and HRI.

Authors
Yufeng Wang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Yuan Xu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Anastasia Nikolova
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Yuxuan Wang
Savannah College of Art and Design, Savannah, Georgia, United States
Jianyu Wang
Zhejiang University, Hangzhou, China
Chongyang Wang
Sichuan University, Chengdu, China
Xin Tong
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Rob2HanD: LLM-Driven Robotic Arm for IMU Interaction Dataset Generation
Abstract

Fine-grained hand interaction sensing with Inertial Measurement Units (IMUs) and machine learning offers a low-cost and effective solution. However, the robustness and generalizability of machine learning models are highly dataset-dependent. Existing datasets for interaction design are typically constructed through extensive real user data collection, which limits interaction diversity and personalization. To address these challenges, we propose Rob2HanD, a novel data-generation tool that utilizes large language models (LLMs) to regulate the motion processes of a robotic arm and rapidly construct IMU datasets. Rob2HanD demonstrates the capability to generate large and usable IMU interaction datasets under few-shot or zero-shot conditions, thereby enhancing the potential for diverse and personalized fine-grained hand interactions. Using a real human dataset, we evaluate machine learning models trained on Rob2HanD-generated data and validate the usability of Rob2HanD. In real-world applications, models trained on Rob2HanD-generated datasets demonstrate strong performance across a variety of customized interaction tasks.

Authors
Jiangyuan Liu
Zhejiang University, Ningbo, China
Chicheng Yu
Zhejiang University, Ningbo, China
Xinli Chen
Zhejiang University, Hangzhou, China
Jiajun Bu
Zhejiang University, Hangzhou, Zhejiang, China
Limin Zeng
Zhejiang University, Hangzhou, China
A Collaborative Crowdsourcing Method for Designing External Interfaces for Autonomous Vehicles
Abstract

Participatory design effectively engages stakeholders in technology development but is often constrained by small, resource-intensive activities. This study explores a scalable complementary method, enabling broad pattern identification in the design for interfaces in autonomous vehicles. We implemented a human-centered, iterative process that combined crowd creativity, structured participatory principles, and expert feedback. Across iterations, participant concepts evolved from simple cues to multimodal systems. Novel suggestions ranged from personalized features, like tracking lights, to inclusive elements like haptic feedback, progressively refining designs toward greater contextual awareness. To assess outcomes, we compared representative designs: a popular-design, reflecting the most frequently proposed ideas, and an innovative-design, merging participant innovations with expert input. Both were evaluated against a benchmark through video-based simulations. Results show that the popular-design outperformed the alternatives on both interpretability and user experience, with expert-validated innovations performing second best. These findings highlight the potential of scalable participatory methods for shaping emerging technologies.

Authors
Ronald Cumbal
Uppsala University, Uppsala, Sweden
Marcus Göransson
Uppsala University, Uppsala, Sweden
Alexandros Rouchitsas
Uppsala University, Uppsala, Sweden
Didem Gürdür Broo
Uppsala University, Uppsala, Sweden
Ginevra Castellano
Uppsala University, Uppsala, Sweden