From Tool to Partner: Expressive Behaviors as the Bridge to Human-Robot Creative Collaboration
Description

Human–robot creative collaboration is often constrained by command–response paradigms that position robots as tools rather than partners. While expressive robotics has demonstrated social value, its role in shaping creative partnerships with humans remains underexplored. We therefore investigate how a robot's expressive behaviors influence co-creative engagement. In a formative study with 5 participants, we identified design insights into how users perceive a robot arm's expressive behaviors. We then implemented these expressive behaviors and conducted a within-subject study with 18 participants, comparing functional-only and expressive conditions in figure-drawing tasks. Results showed that expressive behaviors significantly enhanced human-robot collaboration: participants shifted from viewing the robot as a tool to viewing it as a partner, with a stronger emotional connection and greater collaborative satisfaction. Our contributions include empirical evidence of this partnership transformation and design insights for facilitating human-robot creative collaboration.

SRL Proxemics: Spatial Guidelines for Supernumerary Robotic Limbs in Near-Body Interactions
Description

Wearable supernumerary robotic limbs (SRLs) sit at the intersection of human augmentation and embodied AI, promising to function as extensions of the human body. However, their movements within the intimate near-body space raise unresolved challenges for perceived safety, user control, and trust. In this paper, we present results from a Wizard-of-Oz study (n=18), where participants completed near-body collaboration tasks with SRLs to explore these challenges. We collected qualitative data through think-aloud protocols and semi-structured interviews, complemented by physiological signals and post-task ratings. Findings indicate that greater autonomy did not inherently enhance perceived safety or trust. Instead, participants identified near-body zones and paired them with clear coordination rules. They also expressed expectations for how different arm components should behave, shaping preferences around autonomy, perceived safety, and trust. Building on these insights, we introduce SRL Proxemics, a zone- and segment-level design framework showing that autonomy is not monolithic: perceived safety hinges on spatially calibrated, legible behaviors, not on autonomy level alone.

Swarm UIs: Impact of Assistance on Users’ Sense of Agency
Description

Swarm UIs provide assistance to support users in their tasks and are increasingly explored in HCI. This paper studies the extent to which this assistance impacts users’ sense of agency. A reduced sense of agency can lead to non-use of the interface or a diminished sense of responsibility for the consequences of users’ actions. We conduct three experiments studying the impact of three factors on the sense of agency: the level of assistance, the task difficulty, and the predictability of modules. Our nine assistance levels vary in system autonomy and module coordination (proxy vs. no proxy). We find that higher assistance reduces users’ sense of agency, and this effect is not modulated by task difficulty. Predictability affects only the least assistive interaction techniques. Our results can help foster users’ acceptance, responsibility, and use of swarm UIs.

One Body, Two Minds: Alternating VR Perspective During Remote Teleoperation of Supernumerary Limbs
Description

Remote VR teleoperation with supernumerary robotic limbs enables distant users to operate in another’s local space. While a shared first-person view aids hand-eye coordination, locking the guest’s camera to the host’s head can degrade comfort, embodiment, and coordination. Based on a formative study (N=10) using a virtual supernumerary robotic limb configuration to stress-test coordination, we propose guest-driven perspective switching from a shared first-person baseline (Shared Embodied View) to two alternatives: (a) a stabilized view with guest-controlled rotation (Embedded Anchored View), and (b) a fully decoupled third-person view (Out-of-body View). We ran a user study with 24 pairs (N=48), who switched between the baseline and proposed views as task demands changed. We measured performance, embodiment, fatigue, physiological arousal, and switching behaviors. Our results reveal role-dependent trade-offs: Out-of-body View improves navigation efficiency and reduces errors, while Embedded Anchored View supports embodiment. We conclude with guidelines: use Embedded Anchored View for hand-centric adjustments, use Out-of-body View for navigation and object placement, and ensure smooth transitions between views.

"Tech" a Deep Breath: Technology-guided Breathing Practise With or Without a Social Robot for Psychological and Emotional Well-Being
Description

Most mental health conditions emerge during adolescence, making the university years pivotal for intervention. Nearly 30% of students worldwide experience mental health difficulties, yet support remains constrained by stigma and limited resources. This work investigates how interactive technologies, integrated with a wearable heart-rate sensor, can enhance psychological and emotional well-being through the ancient yogic breathing practice Nadi Shuddhi. We developed two autonomous, adaptive systems: one combining a social robot with a tablet, and one tablet-only, both delivering real-time guidance based on heart-rate variability and breath rate. A study involving 42 university students across 200 sessions revealed that both systems significantly increased parasympathetic activation, mindfulness, and calmness, while reducing short-term stress and depression symptoms. Compared to the tablet-only condition, the robot's physical presence led to a significantly lower breath rate, improved mood, higher perceived competence, and more positive user perceptions, while usability remained comparable. These findings highlight the potential of social robots with biofeedback for supporting youth mental health.

Better Assumptions, Stronger Conclusions: The Case for Ordinal Regression in HCI
Description

Despite the widespread use of ordinal measures in HCI, such as Likert items, there is little consensus among HCI researchers on the statistical methods used for analysing such data. Both parametric and non-parametric methods have been used extensively within the discipline, with limited reflection on their assumptions and appropriateness for such analyses. In this paper, we examine recent HCI works that report statistical analyses of ordinal measures. We highlight prevalent methods, discuss their limitations, and spotlight key assumptions and oversights that diminish the insights drawn from these methods. Finally, we champion and detail the use of cumulative link (mixed) models (CLM/CLMM) for analysing ordinal data, and provide practical worked examples of applying CLM/CLMMs in R to published open-source datasets. This work contributes towards a better understanding of the statistical methods used to analyse ordinal data in HCI and helps to consolidate practices for future work.
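The cumulative link model advocated here has a simple structure that can be sketched in a few lines. The following is an illustrative Python sketch of the proportional-odds form with a logit link, not the paper's R workflow (which would use, e.g., the `ordinal` package's `clm`/`clmm`); the threshold and coefficient values are made up for illustration.

```python
import math

def logistic(z):
    """Standard logistic CDF: the inverse link of a cumulative *logit* model."""
    return 1.0 / (1.0 + math.exp(-z))

def ordinal_probs(x, thresholds, beta):
    """Category probabilities under a cumulative link (proportional-odds) model.

    The model assumes P(Y <= k | x) = logistic(theta_k - beta * x);
    per-category probabilities are successive differences of these
    cumulative probabilities.
    """
    cum = [logistic(t - beta * x) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Illustrative values: 4 thresholds -> 5 ordinal categories (a 5-point Likert item)
thresholds = [-2.0, -0.5, 0.5, 2.0]
beta = 1.2

probs = ordinal_probs(0.0, thresholds, beta)
print([round(p, 3) for p in probs])  # five probabilities summing to 1
```

With a positive `beta`, larger predictor values shift probability mass toward higher response categories, which is exactly the kind of ordered effect that treating Likert responses as interval data or collapsing them for rank tests can obscure.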

Building Resilience in Human–Robot Collaboration: Affective and Cognitive Feedback from Robot for Human-Initiated Failure Handling
Description

Human–robot collaboration increasingly frames robots as teammates rather than tools, yet there is limited guidance on how robots should respond when failures are attributed to the human collaborator. We investigate how robot collaborators should respond to support collaboration experience after a human-attributed failure. In a 4 × 2 mixed factorial design (N = 60), participants completed a collaborative block-stacking task with either a humanoid robot (NAO) or a human collaborator under four scenarios: success, affective feedback, cognitive feedback, and no feedback. We measured collaboration experience in terms of teamwork quality, perceived copresence, and intimacy. Both affective and cognitive feedback improved these outcomes compared with no feedback: affective cues yielded the strongest socio-relational gains (copresence, intimacy), whereas cognitive cues more strongly enhanced perceived teamwork quality. These patterns were consistent across human–robot and human–human collaboration, indicating shared team-level expectations that extend beyond the individual actor. The results provide empirical evidence for socially adaptive robots that pair brief emotional reassurance with concrete guidance to support collaboration after human-attributed failures.
