Repair is a valuable yet challenging activity, especially when product manuals are missing or outdated. Augmented Reality (AR) has been widely explored for repair tasks, but most systems rely on CAD models or pre-constructed assets, which increase authoring costs and limit scalability. We introduce RePairAR, a system that leverages multimodal large language models (MLLMs) to generate interactive AR reassembly guidance directly from user-recorded egocentric disassembly videos. RePairAR infers step-part-relation structures, reverses them for reassembly planning, and delivers the guidance through mixed-media AR visualizations. In a user study with repair novices, RePairAR significantly reduced perceived temporal demand compared to traditional how-to videos. Both media improved self-efficacy, with RePairAR yielding greater gains. Follow-up interviews revealed the mechanisms behind these effects. We contribute a validated MLLM-driven pipeline and highlight design implications for scalable, situated support in everyday repair practices.
ACM CHI Conference on Human Factors in Computing Systems