The growing presence of AI-manipulated videos poses a significant challenge to the integrity of online information. This paper presents findings from an empirical study with 490 participants in the United States to provide a holistic view of public engagement with this threat. We structure our analysis around three key areas: (1) how demographics and media habits influence general perceptions of prevalence; (2) the factors shaping detection accuracy, the calibration of confidence, and the perceptual cues people rely on when viewing in-the-wild videos; and (3) the verification actions people take when suspicion arises. We find that while the public views AI-manipulated media as prevalent, participants struggled to distinguish authentic from AI-manipulated videos and often exhibited poorly calibrated confidence. Furthermore, participants rarely used available detection tools. These patterns highlight the insufficiency of human detection ability and the need for new approaches that enable improved user awareness, successful interventions, and effective mitigation.
ACM CHI Conference on Human Factors in Computing Systems