Watch This! Observational Learning in VR Promotes Better Far Transfer than Active Learning for a Fine Psychomotor Task

Abstract

Virtual Reality (VR) holds great potential for psychomotor training, with existing applications using almost exclusively a "learning-by-doing" active learning approach, despite the possible benefits of incorporating observational learning. We compared active learning (n=26) with different variations of observational learning in VR for a manual assembly task. For observational learning, we considered three levels of visual similarity between the demonstrator avatar and the user, dissimilar (n=25), minimally similar (n=26), or a self-avatar (n=25), as similarity has been shown to improve learning. Our results suggest observational learning can be effective in VR when combined with "hands-on" practice and can lead to better far skill transfer to real-world contexts that differ from the training context. Furthermore, we found self-similarity in observational learning can be counterproductive when focusing on a manual task, and skills decay quickly without further training. We discuss these findings and derive design recommendations for future VR training.

Authors
Isabel Sophie Fitton
University of Bath, Bath, United Kingdom
Elizabeth Dark
University of Bath, Bath, United Kingdom
Manoela Milena Oliveira da Silva
Federal University of Pernambuco, Recife, Brazil
Jeremy Dalton
PwC, Austin, Texas, United States
Michael J. Proulx
University of Bath, Bath, United Kingdom
Christopher Clarke
University of Bath, Bath, United Kingdom
Christof Lutteroth
University of Bath, Bath, United Kingdom
Paper URL

doi.org/10.1145/3613904.3642550

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Movement and Motor Learning A

314
4 presentations
2024-05-15 18:00:00 – 2024-05-15 19:20:00