ProjecTA: A Semi-Humanoid Robotic Teaching Assistant with In-Situ Projection for Guided Tours

Abstract

Robotic teaching assistants (TAs) often use body-mounted screens to deliver content. In nomadic, walk-and-talk learning, such as tours in makerspaces, these screens can distract learners from real-world objects, increasing extraneous cognitive load. HCI research lacks empirical comparisons of potential alternatives, such as robots with in-situ projection versus screen-based counterparts, and offers little knowledge for designing such alternatives. We introduce ProjecTA, a semi-humanoid, gesture-capable TA that guides learners while projecting near-object overlays coordinated with speech and gestures. In a mixed-method study (N=24) in a university makerspace, ProjecTA significantly reduced extraneous load and outperformed its screen-based counterpart in perceived usability, usefulness of visual display, and cross-modal complementarity. Qualitative analyses revealed how ProjecTA's coordinated projections, gestures, and speech anchored explanations in place and time, enhancing understanding in ways a screen could not. We derive key design implications for future robotic TAs leveraging spatial projection to support mobile learning in physical environments.

Authors
Hanqing Zhou
Southern University of Science and Technology, Shenzhen, China
Yichuan Zhang
Southern University of Science and Technology, Shenzhen, China
Zihan Zhang
Southern University of Science and Technology, Shenzhen, China
Wei Zhang
Shenzhen University, Shenzhen, China
Chao Wang
Honda Research Institute Europe, Offenbach/Main, Germany
Pengcheng An
Southern University of Science and Technology, Shenzhen, China

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Living with Robots

M2 - Room M211/212
6 presentations
2026-04-13, 20:15–21:45