Through the Looking-Glass: AI-Mediated Video Communication Reduces Trust and Confidence in Judgement

Abstract

AI-based tools that mediate, enhance or generate parts of video communication may interfere with how people evaluate trustworthiness and credibility. In two preregistered online experiments (N = 2,000), we examined whether AI-mediated video retouching, background replacement and avatars affect interpersonal trust, people's ability to detect lies and confidence in their judgments. Participants watched short videos of speakers making truthful or deceptive statements across three conditions with varying levels of AI mediation. We observed that perceived trust and confidence in judgments declined in AI-mediated videos, particularly in settings in which some participants used avatars while others did not. However, participants' actual judgment accuracy remained unchanged, and they were no more inclined to suspect those using AI tools of lying. Our findings provide evidence against concerns that AI mediation undermines people's ability to distinguish truth from lies, and against cue-based accounts of lie detection more generally. They highlight the importance of trustworthy AI mediation tools in contexts where not only truth, but also trust and confidence matter.

Authors
Nelson Navajas Fernández
Bauhaus-Universität Weimar, Weimar, Germany
Jeff Hancock
Stanford University, Stanford, California, United States
Maurice Jakesch
Bauhaus-Universität Weimar, Weimar, Germany

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Trust and Perception in AI Systems

P1 - Room 118
7 presentations
2026-04-14, 20:15–21:45