EVE: Enabling Anyone to Train Robots using Augmented Reality

Abstract

The increasing affordability of robot hardware is accelerating the integration of robots into everyday activities. However, training a robot to automate a task requires expensive trajectory data, collected by a trained human annotator physically moving a robot through the task. Consequently, only those with access to robots can produce demonstrations to train them. In this work, we remove this restriction with EVE, an iOS app that enables everyday users to train robots using intuitive augmented reality visualizations, without needing a physical robot. With EVE, users can collect demonstrations by specifying waypoints with their hands, visually inspecting the environment for obstacles, modifying existing waypoints, and verifying collected trajectories. In a user study (N=14, D=30) consisting of three common tabletop tasks, EVE outperformed three state-of-the-art interfaces in success rate and was comparable to kinesthetic teaching (physically moving a real robot) in completion time, usability, motion intent communication, enjoyment, and preference (mean of p=0.30). EVE allows users to train robots for personalized tasks, such as sorting desk supplies, organizing ingredients, or setting up board games. We conclude by enumerating limitations and design considerations for future AR-based demonstration collection systems for robotics.

Authors
Jun Wang
University of Washington, Seattle, Washington, United States
Chun-Cheng Chang
University of Washington, Seattle, Washington, United States
Jiafei Duan
University of Washington, Seattle, Washington, United States
Dieter Fox
University of Washington, Seattle, Washington, United States
Ranjay Krishna
University of Washington, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3654777.3676413

Conference: UIST 2024

ACM Symposium on User Interface Software and Technology

Session: 3. New Visualizations

Westin: Allegheny 3
4 presentations
2024-10-15, 00:00–01:00