Robot-supported interventions for joint attention (JA) in autistic children have shown encouraging outcomes, yet most remain confined to stationary robots in indoor settings, limiting opportunities for skill generalization and broader developmental benefits. We introduce an intervention that employs a quadruped robot dog as a peer-like partner for JA training across both indoor and outdoor environments. In this intervention, the robot dog directs children's attention to distributed targets in the environment and initiates JA trials. A four-week pre-post exploratory study with six autistic children demonstrated improvements in JA performance and indications of transfer to daily social communication. Spontaneous behaviors such as motor imitation (crawling) and novel social interactions with the robot also emerged, suggesting potential for broader developmental gains. These findings provide initial evidence for the efficacy of mobile robot-supported JA interventions in naturalistic contexts and offer implications for future design.
Spatial tactile feedback can enhance the realism of geometry exploration in virtual reality applications. Current vibrotactile approaches often lack the spatial and temporal resolution needed to render different 3D geometries. Inspired by the natural deformation of finger pads when exploring 3D objects and surfaces, we propose TactDeform, a parametric approach to render spatio-temporal tactile patterns using a finger-worn electro-tactile interface. The system dynamically renders electro-tactile patterns based on both interaction contexts (approaching, contact, and sliding) and geometric contexts (geometric features and textures), emulating the deformations that occur during real-world touch exploration. Results from a user study (N=24) show that the proposed approach enabled more accurate texture discrimination and geometric feature identification than a baseline. Drawing on results from a free 3D-geometry exploration phase, we provide insights that can inform future tactile interface designs.
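The abstract does not specify TactDeform's implementation. As a purely illustrative sketch, a context-driven renderer might dispatch electro-tactile parameters from the two contexts the authors name (interaction phase and local geometry) roughly as follows; every function name, phase label, and parameter value here is a hypothetical assumption, not the paper's actual design.

\begin{verbatim}
# Hypothetical sketch of a context-driven electro-tactile renderer in the
# spirit of TactDeform. All names and parameter values are illustrative
# assumptions, not the authors' implementation.
from dataclasses import dataclass
from enum import Enum, auto

class Phase(Enum):
    APPROACHING = auto()
    CONTACT = auto()
    SLIDING = auto()

@dataclass
class GeometricContext:
    curvature: float        # local surface curvature under the fingertip
    texture_freq_hz: float  # temporal frequency induced by the texture

def render_pattern(phase: Phase, geo: GeometricContext) -> dict:
    """Map interaction phase + geometric context to stimulation parameters."""
    if phase is Phase.APPROACHING:
        # Faint anticipatory cue before the finger reaches the surface.
        return {"intensity": 0.2, "electrodes": "center", "freq_hz": 50}
    if phase is Phase.CONTACT:
        # Emulate finger-pad deformation: sharper curvature -> stronger,
        # more localized stimulation around the geometric feature.
        return {"intensity": min(1.0, 0.4 + 0.5 * abs(geo.curvature)),
                "electrodes": "feature_profile", "freq_hz": 80}
    # SLIDING: temporal modulation follows the texture's induced frequency.
    return {"intensity": 0.6, "electrodes": "trailing_edge",
            "freq_hz": geo.texture_freq_hz}

print(render_pattern(Phase.SLIDING, GeometricContext(0.1, 120.0)))
\end{verbatim}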
More scientists are now using AI, but prior studies have examined only how they use it `at the desk' for computer-based work. Because scientific work often happens `beyond the desk' at lab and field sites, we conducted the first study of how scientific practitioners use AI for embodied physical tasks. We interviewed 12 scientific practitioners doing hands-on lab and fieldwork in domains such as nuclear fusion, primate cognition, and biochemistry, and found three barriers to AI adoption in these settings: 1) experimental setups are too high-stakes to risk AI errors, 2) constrained environments make it hard to use AI, and 3) AI cannot match the tacit knowledge of humans. Participants then developed speculative designs for future AI assistants that would 1) monitor task status, 2) organize lab-wide knowledge, 3) monitor scientists' health, 4) scout field sites, and 5) handle hands-on chores. Our findings point toward AI as background infrastructure that supports physical work rather than replacing human expertise.
Designing affective behaviors for animal-inspired social robots often relies on intuition and personal experience, leading to fragmented outcomes. To provide more systematic guidance, we first coded and analyzed human–pet interaction videos, validated the resulting insights through literature and interviews, and created structured reference cards that map the design space of pet-inspired affective interactions. Building on this, we developed MojiKit, a toolkit combining the reference cards, a zoomorphic robot prototype (MomoBot), and a behavior control studio. We evaluated MojiKit in co-creation workshops with 18 participants, finding that it helped them design 35 affective interaction patterns beyond their own pet experiences, while the code-free studio lowered the technical barrier and enhanced creative agency. Our contributions include a data-informed, structured resource for pet-inspired affective HRI design, an integrated toolkit that bridges reference materials with hands-on prototyping, and empirical evidence showing how MojiKit empowers users to systematically create richer, more diverse affective robot behaviors.
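MojiKit's internals are not described in the abstract. The minimal sketch below only illustrates one plausible way a reference card could map to a parameterized behavior in a MojiKit-style control studio; the card fields, behavior primitives, and timing scheme are invented for illustration.

\begin{verbatim}
# Hypothetical card-to-behavior mapping in the spirit of MojiKit's
# code-free studio. Fields and primitive names are invented examples.
from dataclasses import dataclass, field

@dataclass
class ReferenceCard:
    emotion: str                  # e.g., "contentment"
    cue: str                      # observed pet cue, e.g., "slow tail sway"
    primitives: list = field(default_factory=list)  # mapped actuations

@dataclass
class BehaviorPattern:
    name: str
    steps: list                   # (primitive, duration_s) tuples

def card_to_behavior(card: ReferenceCard, step_s: float = 1.5) -> BehaviorPattern:
    """Translate a card's cues into a timed sequence of robot primitives."""
    return BehaviorPattern(name=f"{card.emotion}:{card.cue}",
                           steps=[(p, step_s) for p in card.primitives])

card = ReferenceCard("contentment", "slow tail sway",
                     primitives=["tail_sway_slow", "ear_relax", "purr_tone"])
print(card_to_behavior(card))
\end{verbatim}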
Advances in large language models (LLMs) are profoundly reshaping the field of human–robot interaction (HRI). While prior work has highlighted the technical potential of LLMs, few studies have systematically examined their human-centered impact (e.g., human-oriented understanding, user modeling, and levels of autonomy), making it difficult to consolidate emerging challenges in LLM-driven HRI systems. We therefore conducted a systematic literature search following the PRISMA guidelines, identifying 86 articles that met our inclusion criteria. Our findings reveal that (1) LLMs are transforming the fundamentals of HRI by reshaping how robots sense context, generate socially grounded interactions, and maintain continuous alignment with human needs in embodied settings; and (2) current research is largely exploratory, with different studies focusing on different facets of LLM-driven HRI, resulting in wide-ranging choices of experimental setups, study methods, and evaluation metrics. Finally, we identify key design considerations and challenges, offering a coherent overview and guidelines for future research at the intersection of LLMs and HRI.
Sensing fine-grained hand interactions with an Inertial Measurement Unit (IMU) and machine learning offers a low-cost and effective solution, but the robustness and generalizability of machine learning models depend heavily on their training data. Existing datasets for interaction design are typically constructed through extensive real-user data collection, which limits interaction diversity and personalization. To address these challenges, we propose Rob2HanD, a novel data-generation tool that uses large language models (LLMs) to control the motions of a robotic arm and rapidly construct IMU datasets. Rob2HanD can generate large, usable IMU interaction datasets under few-shot or zero-shot conditions, enhancing the potential for diverse and personalized fine-grained hand interactions. Using a real human dataset, we evaluate machine learning models trained on Rob2HanD-generated data and validate the tool's usability. In real-world applications, models trained on Rob2HanD-generated datasets demonstrate strong performance across a variety of customized interaction tasks.
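The abstract leaves Rob2HanD's pipeline unspecified. As a hedged illustration, an LLM-in-the-loop generator might (1) ask a model for varied motion parameters and (2) execute them on the arm while logging IMU streams; the motion schema and robot/IMU interfaces below are hypothetical stand-ins, not the paper's design.

\begin{verbatim}
# Illustrative sketch of an LLM-driven IMU data-generation loop in the
# spirit of Rob2HanD. All interfaces here are invented stand-ins.
import json
import random

def llm_propose_motions(gesture: str, n: int) -> list:
    """Stand-in for an LLM call returning varied motion parameters.
    A real pipeline would prompt a model to vary speed, amplitude,
    and path; here we fake plausible variations."""
    return [{"gesture": gesture,
             "speed": round(random.uniform(0.5, 1.5), 2),
             "amplitude_deg": random.randint(10, 40)} for _ in range(n)]

def execute_and_record(motion: dict, samples: int = 50) -> list:
    """Stand-in for driving the robot arm and logging a wrist-mounted
    IMU. Returns a fake (ax, ay, az, gx, gy, gz) time series."""
    return [[random.gauss(0.0, motion["speed"]) for _ in range(6)]
            for _ in range(samples)]

dataset = []
for motion in llm_propose_motions("pinch", n=3):
    dataset.append({"label": motion["gesture"],
                    "params": motion,
                    "imu": execute_and_record(motion)})
print(json.dumps(dataset[0]["params"]))
\end{verbatim}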
Participatory design effectively engages stakeholders in technology development but is often constrained to small, resource-intensive activities. This study explores a scalable complementary method that enables broad pattern identification in the design of autonomous vehicle interfaces. We implemented a human-centered, iterative process that combined crowd creativity, structured participatory principles, and expert feedback. Across iterations, participant concepts evolved from simple cues to multimodal systems. Novel suggestions ranged from personalized features, such as tracking lights, to inclusive elements, such as haptic feedback, progressively refining designs toward greater contextual awareness. To assess outcomes, we compared representative designs: a popular design, reflecting the most frequently proposed ideas, and an innovative design, merging participant innovations with expert input. Both were evaluated against a benchmark through video-based simulations. Results show that the popular design outperformed the alternatives on both interpretability and user experience, with the expert-validated innovative design performing second best. These findings highlight the potential of scalable participatory methods for shaping emerging technologies.