AI-generated AR Reassembly Guidance from Disassembly Videos to Scaffold Everyday Repair

Abstract

Repair is a valuable yet challenging activity, especially when product manuals are missing or outdated. Augmented Reality (AR) has been widely explored for repair tasks, but most systems rely on CAD models or pre-constructed assets, which escalate authoring costs and constrain scalability. We introduce RePairAR, a system that leverages multimodal large language models (MLLMs) to generate interactive AR reassembly guidance derived directly from user-recorded egocentric disassembly videos. RePairAR deduces step-part-relation structures, reverses these for reassembly planning, and delivers the guidance through mixed-media AR visualizations. In a user study with repair novices, RePairAR significantly reduced perceived temporal demand compared to traditional how-to videos. Both media improved self-efficacy, with RePairAR providing greater gains. Follow-up interviews revealed the mechanisms behind these effects. We contribute a validated MLLM-driven pipeline and highlight design implications for scalable, situated support in everyday repair practices.

Authors
Wenjing Deng
Tsinghua University, Beijing, China
Zhihao Yao
Tsinghua University, Beijing, China
Xinhui Kang
Tsinghua University, Beijing, China
Qirui Sun
Tsinghua University, Beijing, China
Xintong Wu
Tsinghua University, Beijing, China
Sisi He
Nanyang Institute of Technology, Nanyang, Henan, China
Chenzhuo Xiang
Tsinghua University, Beijing, China
Haipeng Mi
Tsinghua University, Beijing, China

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Mindfulness, Breathing, and Biofeedback Technologies

P1 - Room 132
7 presentations
2026-04-17, 18:00–19:30