With the rapid rise of AI, DeepFakes have emerged as a tool with massive potential for abuse. The hyper-realistic imagery of these manipulated videos, coupled with the rapid content delivery of social media platforms, gives deception, propaganda, and disinformation an entirely new meaning. Raising awareness about DeepFakes and how to flag them accurately has therefore become imperative. However, given differences in human cognition and perception, this is not straightforward. In this paper, we conduct an investigative user study and analyze existing AI detection algorithms from the literature to demystify the factors at play behind the scenes when detecting DeepFakes. Based on our findings, we design a customized training program to improve detection and evaluate it on a treatment group drawn from a low-literacy population, which is most vulnerable to DeepFakes. Our results suggest that, while DeepFakes are becoming imperceptible, contextualized education and training can help raise awareness and improve detection.
https://doi.org/10.1145/3411764.3445699
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)