Robots are embodied agents that act under several sources of uncertainty. When assisting humans in a collaborative task, robots need to communicate their uncertainty to help inform decisions. In this study, we examine visualisations of a robot’s uncertainty in a high-stakes assisted decision-making task. In particular, we explore how different modalities of uncertainty visualisation (graphical display vs. the robot’s embodied behaviour) and confidence levels (low, high, 100%) conveyed by a robot affect human decision-making and perception during a collaborative task. Our results show that these visualisations significantly impact how participants arrive at their decisions, as well as how they perceive the robot’s transparency across the different confidence levels. We highlight potential trade-offs and offer implications for robot-assisted decision-making. Our work contributes empirical insights into how humans make use of uncertainty visualisations conveyed by a robot in a critical robot-assisted decision-making scenario.
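To make the two modalities concrete, here is a minimal Python sketch; the condition values, the bar rendering, and the motion parameters are all our own illustrative assumptions, not details reported in the study:

```python
# Illustrative only: sketches how the two visualisation modalities and
# three confidence levels could be parameterised. Nothing here comes
# from the paper itself.
from dataclasses import dataclass

@dataclass
class Recommendation:
    option: str
    confidence: float  # 0.0-1.0; e.g. 0.55 (low), 0.85 (high), 1.0 (100%)

def graphical_display(rec: Recommendation) -> str:
    """Render confidence as a simple text bar, one '#' per 10%."""
    filled = round(rec.confidence * 10)
    return f"{rec.option} [{'#' * filled}{'.' * (10 - filled)}] {rec.confidence:.0%}"

def embodied_behaviour(rec: Recommendation) -> dict:
    """Map confidence onto hypothetical motion parameters: lower
    confidence -> slower gestures and a longer hesitation pause."""
    return {
        "gesture_speed": 0.4 + 0.6 * rec.confidence,   # normalised speed
        "hesitation_s": 2.0 * (1.0 - rec.confidence),  # pause before acting
    }

if __name__ == "__main__":
    for c in (0.55, 0.85, 1.0):
        rec = Recommendation("option A", c)
        print(graphical_display(rec), embodied_behaviour(rec))
```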
The introduction of robots in public spaces raises many questions concerning emergent interactions with robots. In this paper, we use video analysis to study two robotic trashcans deployed in a busy city square. We focus on the movement-based practices that emerged between the robot, the robot operators, and the inhabitants of the square. These practices spanned ways of attracting the robot and disposing of trash, the robot 'asking' for trash, 'demonstrations' by those in the square, as well as passersby navigating around and in coordination with the robots. In discussion, we document these 'spontaneous simple sequential systematics': interactions that were systematic (they had an order), sequential (they had parts that happened one at a time), simple (in that they could be understood and copied by an observer), and spontaneous (they could be produced with no prompting or training). Building on this, we discuss how we might think of robotic motion as a design space, along with HCI contributions to urban robotics.
Teleoperating social robots requires operators to ``speak as the robot,'' since local users favor robots whose appearance and voice match.
This study focuses on real-time altered auditory feedback (AAF), a method that transforms the acoustic traits of one's speech and feeds the result back to the speaker, as a way to shift the operator's self-representation toward ``becoming the robot.''
To explore whether AAF with voice transformation (VT) matched to the robot's appearance can influence the operator's self-representation and ease the task, we experimented with three conditions: no VT (No-VT), VT only (VT-only), and VT with AAF (VT-AAF); in each, participants teleoperated a robot to verbally serve real passersby at a bakery.
The questionnaire results demonstrate that VT-AAF shifted the participants' self-representation to match the robot's character and improved their subjective teleoperation experience, while task performance and implicit measures of self-representation were not significantly affected.
Notably, 87\% of the participants preferred VT-AAF over the other conditions.
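As a concrete illustration of an AAF pipeline, below is a minimal full-duplex loop that feeds a pitch-shifted version of the operator's voice back in near real time. This is an assumption-laden toy, not the authors' system: the sample rate, block size, PITCH_RATIO, and the naive per-block resampler are all invented, and a production VT would use PSOLA or a phase vocoder instead.

```python
# A minimal sketch of real-time AAF with a crude voice transformation.
# All parameters here are assumptions for illustration; the naive
# resampler clicks at block boundaries, which a real pitch shifter
# (PSOLA, phase vocoder) would avoid.
import numpy as np
import sounddevice as sd  # pip install sounddevice

RATE = 16_000      # sample rate (Hz)
BLOCK = 1_024      # samples per callback block
PITCH_RATIO = 1.3  # >1 raises pitch toward a hypothetical robot voice

prev = np.zeros(BLOCK, dtype=np.float32)  # one block of history

def callback(indata, outdata, frames, time, status):
    """Read the last two blocks at PITCH_RATIO x speed, which shifts
    pitch (and crudely time-compresses) within each block."""
    global prev
    if status:
        print(status)
    buf = np.concatenate([prev, indata[:, 0]])   # 2*BLOCK samples
    positions = np.arange(frames) * PITCH_RATIO  # fractional read indices
    outdata[:, 0] = np.interp(positions, np.arange(buf.size), buf)
    prev = indata[:, 0].copy()

# Full-duplex stream: microphone in, transformed feedback out.
with sd.Stream(samplerate=RATE, blocksize=BLOCK, channels=1,
               dtype="float32", callback=callback):
    input("Speaking now plays back a pitch-shifted version. Enter to stop.")
```

The one-block history buffer keeps the fractional read positions in range at the cost of roughly one extra block of latency.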
A typical open-plan office layout cannot optimally host multiple collocated work activities, personal needs, and situational events, as its space exerts a range of environmental demands on workers in terms of maintaining their acoustic, visual, or privacy comfort. Hypothesising that these demands could be mitigated by optimising the environmental resources of the architectural layout, we deployed a mobile robotic partition that autonomously manoeuvres between predetermined locations. During a five-week in-the-wild study within a real-world open-plan office, we studied how 13 workers adopted four distinct adaptation strategies when sharing spatiotemporal control of the robotic partition. Based on their logged and self-reported reasoning, we present six initiation regulating factors that determine the appropriateness of each adaptation strategy. This study thus contributes to how future human-building interaction could autonomously improve the experience, comfort, performance, and even the health and wellbeing of multiple workers sharing the same workplace.
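The paper reports strategies and regulating factors rather than an algorithm, but a toy arbitration loop helps picture what sharing spatiotemporal control of a partition could mean in code; every name and threshold below (LOCATIONS, MIN_DWELL_S, first-come-first-served queueing) is a hypothetical choice of ours, not the deployed system:

```python
# Purely illustrative arbitration of shared control over one partition:
# first-come-first-served requests to predetermined locations, with a
# minimum dwell time so the partition does not thrash between workers.
from collections import deque
from typing import Optional

LOCATIONS = {"desk_A", "desk_B", "meeting_corner", "storage"}  # hypothetical
MIN_DWELL_S = 15 * 60  # stay at least 15 minutes before moving again

class PartitionController:
    def __init__(self, start: str):
        self.location = start
        self.since = 0.0        # time of last move (seconds)
        self.queue = deque()    # pending (timestamp, target) requests

    def request(self, now: float, target: str) -> None:
        if target in LOCATIONS:
            self.queue.append((now, target))

    def step(self, now: float) -> Optional[str]:
        """Return the next target if a move is due, else None."""
        if not self.queue or now - self.since < MIN_DWELL_S:
            return None
        _, target = self.queue.popleft()
        if target == self.location:
            return None
        self.location, self.since = target, now
        return target

ctrl = PartitionController(start="desk_A")
ctrl.request(now=0.0, target="meeting_corner")
print(ctrl.step(now=100.0))    # None: dwell time not yet elapsed
print(ctrl.step(now=1000.0))   # "meeting_corner"
```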
As intelligent agents transition from controlled to uncontrolled environments, they face challenges that sometimes exceed their operational capabilities. In many scenarios, they rely on assistance from bystanders to overcome those challenges.
Using robots that get stuck in urban settings as an example, we investigate how agents can prompt bystanders to provide assistance. We conducted four focus group sessions with 17 participants, built around bodystorming: role-playing activities in which participants assumed the roles of robots and bystander pedestrians. Generating insights from both the assumed robot and bystander perspectives, we identified potential non-verbal help-seeking strategies (i.e., addressing bystanders, cueing intentions, and displaying emotions) and factors shaping the assistive behaviours of bystanders.
Drawing on these findings, we offer design considerations for help-seeking urban robots and other agents operating in uncontrolled environments to foster casual collaboration, encompass expressiveness, align with agent social categories, and curate appropriate incentives.
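To show how the three identified strategies might be sequenced on a stuck robot, here is a small escalation sketch; the strategy names come from the findings above, but the ordering, concrete behaviours, and timeouts are our own assumptions for illustration:

```python
# Illustrative escalation through non-verbal help-seeking strategies.
# The escalation order and timing thresholds are invented, not a design
# from the paper.
import time

STRATEGIES = [
    ("cue_intention",     "orient body toward the obstacle, gaze at it"),
    ("display_emotion",   "play a distressed light/sound pattern"),
    ("address_bystander", "turn toward the nearest passerby and beep"),
]

def seek_help(is_stuck, step_timeout_s: float = 10.0) -> None:
    """Escalate through help-seeking strategies while still stuck."""
    for name, behaviour in STRATEGIES:
        if not is_stuck():
            return
        print(f"[{name}] {behaviour}")
        deadline = time.monotonic() + step_timeout_s
        while is_stuck() and time.monotonic() < deadline:
            time.sleep(0.5)
    if is_stuck():
        print("still stuck: wait passively for assistance")

# Toy usage: 'stuck' resolves after ~12 s, so the robot escalates twice.
start = time.monotonic()
seek_help(lambda: time.monotonic() - start < 12, step_timeout_s=5.0)
```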