Spatial tensions in real-world deployments of autonomous robots (e.g., sidewalk conflicts, boundary violations) expose a critical oversight: the neglect of space as a social construct through which people form expectations and regulate access and behavior, that is, territoriality. Beyond proxemics, Human–Robot Interaction lacks the theoretical models and shared vocabulary needed to support empirical research on this dimension. To address this gap, we adapt insights from environmental psychology to develop NOX (ENtry, Occupancy, EXit), a stage-based model of human–robot territorial dynamics. NOX pinpoints sources of robot territorial infringement (i.e., friction points), which we validated in a between-subjects vignette study (N = 290). Our findings indicate that mismatches between robot behavior and human expectations at these friction points are associated with more negative affect, stronger defensive intent, and lower perceived appropriateness. NOX clarifies this facet of the human–robot spatial relationship and identifies future directions for design and research toward the harmonious integration of robots into human environments.
Robotic teaching assistants (TAs) often use body-mounted screens to deliver content. In nomadic, walk-and-talk learning, such as tours in makerspaces, these screens can distract learners from real-world objects, increasing extraneous cognitive load. HCI research lacks empirical comparisons of potential alternatives, such as robots with in-situ projection versus screen-based counterparts, and offers little guidance for designing such alternatives. We introduce ProjecTA, a semi-humanoid, gesture-capable TA that guides learners while projecting near-object overlays coordinated with speech and gestures. In a mixed-method study (N=24) in a university makerspace, ProjecTA significantly reduced extraneous load and outperformed its screen-based counterpart in perceived usability, usefulness of visual display, and cross-modal complementarity. Qualitative analyses revealed how ProjecTA’s coordinated projections, gestures, and speech anchored explanations in place and time, enhancing understanding in ways a screen could not. We derive key design implications for future robotic TAs that leverage spatial projection to support mobile learning in physical environments.
Meaningful connections formed between people and robots are a key factor in sustaining long-term interaction. Yet while onboarding experiences for social robot products are often carefully designed to cultivate these bonds, offboarding receives far less attention. This imbalance can result in abrupt disruptions in human–robot bonds when products reach end-of-life. In this paper, we examine a case study describing the shutdown of Moxie, a social robot designed to support children's socio-emotional learning. Through a qualitative analysis of the company’s public communications and users’ online reactions to the shutdown, we identify key missed opportunities to prepare and support users throughout the robot’s final interactions. In the absence of a structured offboarding experience, the emotional, technical, and communicative burdens were shifted to parents. Drawing from these findings, we introduce ethical sunsetting recommendations for social robots and offer a reimagined offboarding experience aimed at supporting healthy emotional detachment during product end-of-life.
A line of research in HCI and HRI has started to consider robot failures, errors, and breakdowns not as problems to be eliminated, but as opportunities to inform and enrich design. This shift has led to growing interest in how robotic fallibility affects user trust, interaction quality, and system acceptance. In this paper, we inquire into what it means to design with fallibility. Drawing on feminist technoscience, we examine how current approaches frame the roles of designers and users (agency), how research methods shape the phenomena they study (performativity), and how underlying research goals carry ethical and epistemological implications (motivation). In recognizing robotic fallibility as a sociotechnical phenomenon and design research as a world-making practice, we provide design considerations that promote more reflexive, inclusive, and politically aware engagements with (robot) failure in HRI and HCI.
The emergence of embodied intelligence is expanding the landscape of human–robot interaction (HRI) to include more direct and physical contact. While robot touch can provide assistance or comfort, a lack of perceived transparency before the touch, meaning limited clarity about the robot’s intentions, can lead to user confusion and anxiety. Despite its importance for user experience, the perceived transparency of robots’ pre-touch conveyance methods remains underexplored. This study systematically investigates how the conveyance of touch information affects perceived transparency and safety. Informed by a 340-person survey, we conducted a video-based study with 41 participants, comparing nine different robot pre-touch cues. Our mixed-methods approach combined subjective ratings and interviews with objective measures such as eye-tracking. We found that greater perceived transparency significantly enhances perceived safety. Video displays were most effective at improving clarity, while task-oriented touch was more readily accepted than emotion-oriented touch. Based on these findings, we propose evidence-based design guidelines for safer and more effective robot touch interaction.
What values do technologists cite when evaluating technologies intended for application in "real world" social contexts like domestic settings? This paper examines the negotiation of values among the organizers who design and run the world’s largest domestic service robotics competition, RoboCup@Home. We perform an interpretive analysis of collaborative discussions from the organization's open GitHub repositories and meeting notes from 2015–2023, informed by participatory digital ethnography and on-site fieldwork. Our analysis reveals the pervasive invocation of values, such as "realistic," and antivalues, such as "unfair," in these discussions. We find that the perception of infeasibility strongly discouraged the adoption of proposals that organizers otherwise agreed would have made RoboCup@Home, and domestic service robotics, more realistic, natural, and fair. We suggest that future work attend to polysemy in the negotiation of values, trace shared values yet unrealized in negotiated settings, and consider infrastructural interventions that expand the feasibility of realizing values. The "real worlds" and values negotiated within spaces of competitive evaluation, be they imagined, realized, or unrealized, nonetheless shape the sociotechnical realities we and our technologies come to create and inhabit.