Virtual environments can support psychomotor learning by allowing learners to observe instructor avatars. Instructor avatars that resemble the learner hold promise for enhancing learning; however, it is unclear whether this benefit extends to psychomotor tasks, or how similar such avatars need to be. We investigated 'minimal' customisation of instructor avatars, approximating a learner's appearance by matching only key visual features: gender, skin tone, and hair colour. Such avatars can be created easily and avoid the problems associated with highly similar avatars. Using modern dancing as the skill to learn, we compared the effects of visually similar and dissimilar avatars, considering learning both on a screen (n=59) and in VR (n=38). Our results indicate that minimal avatar customisation leads to significantly more vivid visual imagery of the dance moves than dissimilar avatars do. We analyse variables affecting interindividual differences, discuss the results in relation to theory, and derive design implications for psychomotor training in virtual environments.
https://doi.org/10.1145/3544548.3580944
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)