Current tablet-based interfaces for drone operations often impose a heavy cognitive load on pilots and reduce situational awareness by dividing attention between the video feed and the real world. To address these challenges, we designed a heads-up augmented reality (AR) interface that overlays in-situ information to support drone pilots in safety-critical tasks. Through participatory design workshops with professional pilots, we identified key features and developed an adaptive AR interface that dynamically switches between task and safety views to prevent information overload. We evaluated our prototype by creating a realistic building inspection task and comparing three interfaces: a 2D tablet interface, a static AR interface, and our adaptive AR design. A user study with 15 participants showed that the AR interfaces improved access to safety information, while the adaptive AR interface reduced cognitive load and enhanced situational awareness without compromising task performance. We offer design insights for developing safety-first heads-up AR interfaces.
Olfactory stimuli have demonstrated the potential to evoke emotional depth and enhance user experiences in HCI, yet their role in shaping perceptions of social robots remains largely unexplored. This study investigates how olfactory (scent) and auditory (voice) stimuli influence user perceptions of social robots. Using a 2x2 between-subjects design, participants interacted with a social robot under conditions combining pleasant or unpleasant scents with friendly or unfriendly voices. The study measured perceived trust, friendliness, competence, and engagement. Our findings show that pleasant scents can enhance perceptions of friendliness and engagement, while friendly voices can improve trust, friendliness, and engagement. A congruent combination of scent and voice affects friendliness and engagement but does not influence trust or competence. This study contributes to the growing body of work on multi-sensory Human-Robot Interaction (HRI) design, offering implications for creating more socially interactive robots.
Integrating curious behavior traits into robots is essential for them to learn and adapt to new tasks over their lifetime and to enhance human-robot interaction. However, the effects of a robot expressing curiosity on user perception, interaction, and experience in collaborative tasks remain unclear. In this work, we present a Multimodal Large Language Model-based system that equips a robot with non-verbal and verbal curiosity traits. We conducted a user study (N=20) to investigate how these traits modulate the robot's behavior and users' impressions of its sociability and the quality of interaction. Participants prepared cocktails or pizzas with a robot that was either curious or non-curious. Our results show that we could create user-centric curiosity: participants perceived the curious robot as more human-like, inquisitive, and autonomous, and interacted with it for longer. We contribute a set of design recommendations that allow system designers to take advantage of curiosity in collaborative tasks.
Social robots are a class of emerging smart consumer electronics devices that promise sophisticated experiences featuring emotive capabilities, artificial intelligence, conversational interaction, and more. Given unique risk factors such as emotional attachment, little is known about how social robots communicate these promises to consumers and whether they adequately deliver on them within the overall product experience, both prior to and during user interaction.
Through a consumer protection lens, this paper systematically investigates manufacturer claims made for four commercially available social robots, evaluating these claims against the delivered user experience and consumer reviews.
We find that social robots vary widely in the manner and extent to which they communicate intelligent features and the supposed benefits of these features, while consumer perspectives similarly span a wide range of perceptions of robot and AI performance, capabilities, and product frustrations. We conclude by discussing social robots' unique characteristics and propensities for consumer risk, and consider implications for key stakeholders such as regulators, developers, and researchers of social robots.
Supernumerary robotic limbs (SRLs) are robotic structures integrated closely with the user's body that augment human physical capabilities and necessitate seamless, naturalistic human-machine interaction. For effective assistance in physical tasks, enabling SRLs to hand over objects to humans is crucial. Yet designing heuristic-based handover policies for robots is time-consuming, difficult to generalize across tasks, and results in less human-like motion. When trained on suitable datasets, generative models are a powerful alternative for creating naturalistic handover motions. We introduce 3HANDS, a novel dataset of object handover interactions between a participant performing a daily activity and another participant enacting a hip-mounted SRL in a naturalistic manner. 3HANDS captures the unique characteristics of SRL interactions: operation in intimate personal space with asymmetric object origins, implicit motion synchronization, and the user's engagement in a primary task during the handover. To demonstrate the effectiveness of our dataset, we present three models: one that generates naturalistic handover trajectories, one that determines appropriate handover endpoints, and one that predicts the moment to initiate a handover. In a user study (N=10), we compared handover interactions performed with our method against a baseline. The findings show that our method was perceived as significantly more natural, less physically demanding, and more comfortable.
In this work, we introduce a formal design approach derived from the performing arts for designing robot group movement. In a first experiment, we worked with trained actors and professional performers in a participatory design process to identify common group movement patterns. In follow-up studio work, we distilled twelve such patterns, transposed them into a performance script, built a scale model to support the performance process, and evaluated the patterns with a senior actor under studio conditions. In a third experiment, we evaluated the refined patterns with 20 volunteers in a user study. Results from our affective circumplex modelling suggest that the patterns elicit positive emotional responses from users. Moreover, participants performed better than chance in identifying the motion patterns without prior training. Based on these results, we propose design guidelines for social robots’ behaviour and movement design to improve their overall comprehensibility in interaction.
As social service robots become commonplace, it is essential for them to effectively interpret human signals, such as speech, gestures, and eye gaze, when people need to focus on their primary tasks, so as to minimize interruptions and distractions. Toward such socially acceptable Human-Robot Interaction, we conducted a study (N=24) in an AR-simulated coffee-chat context. Participants performed social cues to signal intentions to an anthropomorphic, zoomorphic, grounded technical, or aerial technical robot waiter while acting as speakers or listeners. Our findings reveal common patterns of social cues across intentions, the effects of robot morphology on social cue position and of conversational role on social cue complexity, and users' rationale for choosing social cues. We offer insights into social cues in relation to perceptions of robots, cognitive load, and social context. Additionally, we discuss design considerations for approaching behavior, social cue recognition, and response strategies in future service robots.