ARify: Leveraging Narrated Instructional Videos to Create Augmented Reality Tutorials for Procedural Tasks

Abstract

Augmented Reality (AR) tutorials enhance procedural task learning by providing situated, step-by-step guidance. Yet, creating such tutorials requires AR authoring expertise, posing a significant entry barrier. To lower this barrier, we introduce ARify, an authoring system that semi-automatically transforms narrated instructional videos into AR tutorials. To guide system design, we conducted a content analysis of video tutorials and derived a design space of instructional intents, tactics, and AR representations. Building on this, ARify generates AR tutorials by integrating a vision–language model to plan tutorial structures and an AR builder to configure AR representations, and offers interfaces that allow users to refine and customize the results. A numerical study on three machine tasks and a user study with 18 participants showed that ARify achieves promising performance across task types, and allows novices to author effective AR tutorials, validating its effectiveness and usability.

Authors
Xiyun Hu
Purdue University, West Lafayette, Indiana, United States
Chenfei Zhu
Purdue University, West Lafayette, Indiana, United States
Shao-Kang Hsia
Purdue University, West Lafayette, Indiana, United States
Dizhi Ma
Purdue University, West Lafayette, Indiana, United States
Rahul Jain
Purdue University, West Lafayette, Indiana, United States
Karthik Ramani
Purdue University, West Lafayette, Indiana, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Interactive Systems for Teaching, Learning, and Concept Formation

P1 - Room 134
7 presentations
2026-04-16, 18:00–19:30