FakeForward: Using Deepfake Technology for Feedforward Learning

Abstract

Videos are commonly used to support the learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing users a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data- and labour-intensive: a lot of video footage needs to be collected and manually edited to create an effective self-modelling video. We address this by presenting FakeForward -- a method which uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user's. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.

Authors
Christopher Clarke
University of Bath, Bath, United Kingdom
Jingnan Xu
University of Bath, Bath, United Kingdom
Ye Zhu
University of Bath, Bath, United Kingdom
Karan Dharamshi
University of Bath, Bath, United Kingdom
Harry McGill
University of Bath, Bath, United Kingdom
Stephen Black
University of Bath, Bath, United Kingdom
Christof Lutteroth
University of Bath, Bath, United Kingdom
Paper URL

https://doi.org/10.1145/3544548.3581100

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Technology-Powered Learning

Hall D
6 presentations
2023-04-27 01:35:00 – 2023-04-27 03:00:00