Liars & Deepfakes

Conference Name
CHI 2026
Living Contracts: Beyond Document-Centric Interaction with Legal Agreements
Abstract

User interaction with legal contracts has been limited to document reading, which is often complicated by complex, ambiguous legal language. We explore possible futures where contract interfaces go beyond single-document interfaces to (1) educate users about legal rights not stated in the contract, (2) transform legal language into alternative representations to aid information tasks before, during, and after signing, and (3) proactively supply contractual information at relevant moments. We refer to these future interfaces collectively as Living Contracts. Using residential leases as a case study, we created three design probes representing different possible Living Contracts. A three-part qualitative study (N=18) revealed participants' barriers to interacting with contracts, including interpreting complex language, uncertainty about legal rights, and pressure to sign quickly. Participants' feedback on the probes highlighted how Living Contracts have the potential to address these challenges and open new design opportunities for human-contract interaction beyond document reading.

Authors
Ziheng Huang
University of Illinois Urbana-Champaign, Urbana, Illinois, United States
Robin B. Kar
University of Illinois Urbana-Champaign, Urbana, Illinois, United States
Hari Sundaram
University of Illinois, Urbana, Illinois, United States
Tal August
University of Illinois Urbana-Champaign, Urbana, Illinois, United States
Collab: Fostering Critical Identification of Deepfake Videos on Social Media via Synergistic Annotation
Abstract

Identifying deepfake videos on social media platforms is challenged by dynamic spatio-temporal artifacts and inadequate user tools. This hinders both critical viewing by users and scalable moderation by platforms. Here, we present Collab, a web plugin that enables users to collaboratively annotate deepfake videos. Collab integrates three key components: (i) an intuitive interface for spatio-temporal labeling in which users provide confidence scores and rationales, facilitating detailed input even from non-experts; (ii) a novel confidence-weighted spatio-temporal Intersection-over-Union (IoU) algorithm that aggregates diverse user annotations into accurate consensus annotations; and (iii) a hierarchical demonstration strategy that presents aggregated results to guide attention toward contentious regions and foster critical evaluation. A seven-day online study (N=90), in which participants annotated suspicious videos while viewing an online experimental platform, compared Collab against two conditions without aggregation or demonstration, respectively. Collab significantly improved identification accuracy and enhanced reflection compared to the non-demonstration condition, and outperformed the non-aggregation condition in perceived novelty and effectiveness.
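The abstract does not specify the confidence-weighted spatio-temporal IoU algorithm itself. As a hedged illustration of the general idea only (not the authors' implementation), the sketch below encodes an annotation as an axis-aligned spatio-temporal box `(t0, t1, x0, y0, x1, y1)`, computes a standard volumetric IoU between two such boxes, and forms a confidence-weighted consensus box; the function names and box encoding are assumptions:

```python
def st_iou(a, b):
    """Intersection-over-Union of two spatio-temporal boxes.

    Each box is (t0, t1, x0, y0, x1, y1): a time span plus a
    spatial rectangle, treated as an axis-aligned 3D volume.
    """
    t = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))  # temporal overlap
    w = max(0.0, min(a[4], b[4]) - max(a[2], b[2]))  # horizontal overlap
    h = max(0.0, min(a[5], b[5]) - max(a[3], b[3]))  # vertical overlap
    inter = t * w * h

    def vol(r):
        return (r[1] - r[0]) * (r[4] - r[2]) * (r[5] - r[3])

    union = vol(a) + vol(b) - inter
    return inter / union if union > 0 else 0.0


def aggregate(annotations):
    """Confidence-weighted consensus box from (box, confidence) pairs."""
    total = sum(c for _, c in annotations)
    return tuple(
        sum(c * box[i] for box, c in annotations) / total
        for i in range(6)
    )
```

One natural extension, in the spirit of the abstract, would be to discard annotations whose IoU with the consensus box falls below a threshold and re-aggregate, so low-agreement outliers carry less weight.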

Authors
Shuning Zhang
Tsinghua University, Beijing, China
Linzhi Wang
Tsinghua University, Beijing, China
Shixuan Li
Tsinghua University, Beijing, China
Yuanyuan Wu
Shanghai Jiao Tong University, Shanghai, China
Yuwei Chuai
University of Luxembourg, Esch-sur-Alzette, Luxembourg
Luoxi Chen
Tsinghua University, Beijing, China
Xin Yi
Tsinghua University, Beijing, China
Hewu Li
Tsinghua University, Beijing, China
When the Codec Hallucinates: User Perceptions of Miscompressed Images
Abstract

People exchange images every day. New methods for image compression leverage neural networks to save bandwidth, but they can undermine semantic integrity. The term miscompression refers to unintended semantic changes to image details introduced by generative AI during neural (de)compression. Although prior work has speculated about the resulting risks, no empirical evidence exists on how people perceive these novel compression artifacts. In this study, 115 human subjects compared original images with conventionally compressed, neurally compressed, and miscompressed images. Participants perceived that miscompressions elevate the risk of misunderstandings when communicating with images. They also frequently attributed miscompressions to intentional editing, whereas conventional JPEG artifacts were more often recognized as distortions. This paper proposes a method to study this new phenomenon, provides the first empirical evidence of user perceptions of miscompressions, and derives implications for trust in images, as well as interface designs that mitigate the risk.

Authors
Nora Hofer
University of Innsbruck, Innsbruck, Austria
Rainer Böhme
University of Innsbruck, Innsbruck, Austria
Designing Effective Digital Literacy Interventions for Boosting Deepfake Discernment
Abstract

Deepfake images can erode trust in institutions and compromise election outcomes, as people often struggle to discern real images from deepfake images. Improving digital literacy can help address these challenges. Here, we compare the efficacy of five digital literacy interventions for boosting people's ability to discern deepfakes: (1) textual guidance on common indicators of deepfakes; (2) visual demonstrations of these indicators; (3) a gamified exercise for identifying deepfakes; (4) implicit learning through repeated exposure and feedback; and (5) explanations of how deepfakes are generated with the help of AI. We conducted an experiment with N=1,200 participants from the United States to test the immediate and long-term effectiveness of our interventions. Our results show that our lightweight, easy-to-understand interventions can boost deepfake image discernment by up to 13 percentage points while maintaining trust in real images.

Authors
Dominique Geissler
LMU Munich, Munich, Germany
Claire Robertson
Colby College, Waterville, Maine, United States
Stefan Feuerriegel
LMU Munich, Munich, Germany
Seeing, Hearing, and Knowing Together: Multimodal Strategies in Deepfake Videos Detection
Abstract

As deepfake videos become increasingly difficult for people to recognise, understanding the strategies humans use is key to designing effective media literacy interventions. We conducted a study with 195 participants between the ages of 21 and 40, who judged real and deepfake videos, rated their confidence, and reported the cues they relied on across visual, audio, and knowledge strategies. Participants were more accurate with real videos than with deepfakes and showed lower expected calibration error for real content. Through association rule mining, we identified cue combinations that shaped performance. Visual appearance cues, vocal cues, and intuition often co-occurred in successful identifications, which highlights the importance of multimodal approaches in human detection. Our findings show which cues help or hinder detection and suggest directions for designing media literacy tools that guide effective cue use. Building on these insights can help people improve their identification skills and become more resilient to deceptive digital media.
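Expected calibration error (ECE), which the abstract uses to compare confidence calibration on real versus deepfake content, is a standard metric: judgments are grouped into confidence bins, and the gap between each bin's mean confidence and its accuracy is averaged, weighted by bin size. A minimal sketch (the binning scheme and function name are our own choices, not necessarily those used in the paper):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: size-weighted mean |accuracy - mean confidence| per bin.

    confidences: per-judgment confidence in [0, 1]
    correct: per-judgment 1 (correct) or 0 (incorrect)
    """
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))

    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(acc - avg_conf)
    return ece
```

A perfectly calibrated judge (80% accuracy at 80% confidence) scores 0; overconfidence on deepfakes shows up as a larger gap in the high-confidence bins.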

Authors
Chen Chen
Nanyang Technological University, Singapore, Singapore
Dion Goh
Nanyang Technological University, Singapore, Singapore
Do Entropic Measurements of the Diversity of AI-generated Images Match Human Judgement?
Abstract

This paper proposes that the ability to generate diverse outputs in response to a single prompt is necessary for text-to-image models to become more effective creativity support tools. It formalises the problem of measuring the diversity of generated text and images, with an emphasis on interactive, exploratory use in open-ended and creative tasks. It suggests, motivated by research in the psychology of creativity, that diversity should sit alongside image quality and fit-to-prompt as critical measures in this setting. The paper adapts several diversity measures from the literature to this task, then explores how they compare to human diversity ratings. These evaluations show that algorithmic measures of diversity can be a useful proxy for human ratings, with both declining in accuracy as the difficulty of the task increases. The paper concludes with an exploratory qualitative analysis of the factors involved in human diversity judgments to guide future research in this emerging area.
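The paper's adapted diversity measures are not given in the abstract. As a hedged sketch of the underlying principle only, entropic diversity measures typically score a set of generated outputs by the Shannon entropy of some discretization of them, e.g. cluster assignments of image embeddings; all names here are illustrative assumptions:

```python
import math
from collections import Counter


def entropy_diversity(labels):
    """Shannon entropy (in nats) of a label distribution.

    labels: one discrete label per generated image, e.g. the
    cluster each image embedding was assigned to. Higher entropy
    means the outputs are spread more evenly across clusters,
    i.e. the set is more diverse; identical labels give 0.
    """
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

For a batch of k images, the measure ranges from 0 (all images in one cluster) to log(k) (every image in its own cluster), which makes it easy to normalize for comparison against human diversity ratings.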

Authors
Kazjon Grace
The University of Sydney, Sydney, Australia
Francisco J. Ibarrola
The University of Sydney, Sydney, Australia
Jody Watts
The University of Sydney, Sydney, Australia
Shu Takahashi
The University of Sydney, Sydney, Australia
Parth Bhargava
The University of Sydney, Sydney, Australia
Eduardo Velloso
The University of Sydney, Sydney, New South Wales, Australia
Conversational Inoculation to Enhance Resistance to Misinformation
Abstract

The proliferation of misinformation is a globally acknowledged problem. Cognitive Inoculation helps build resistance to different forms of persuasion, such as misinformation. We investigate Conversational Inoculation, a method to help people build resistance to misinformation through dynamic conversations with a chatbot. We built a Web-based system to implement the method and conducted a within-subject user experiment to compare it with two traditional inoculation methods. Our results validate Conversational Inoculation as a viable novel method and show how it enhanced participants' resistance to misinformation. A qualitative analysis of the conversations between participants and the chatbot highlighted adaptability, independence, trust, and friction as the main factors affecting Conversational Inoculation. We discuss the opportunities and challenges of using Conversational Inoculation to combat misinformation. Our work contributes a timely investigation and a promising research direction in scalable ways to combat misinformation.

Award
Honorable Mention
Authors
Dániel Szabó
University of Oulu, Oulu, Finland
Chi-Lan Yang
The University of Tokyo, Tokyo, Japan
Aku Visuri
University of Oulu, Oulu, Finland
Jonas Oppenlaender
University of Oulu, Oulu, Finland
Bharathi Sekar
University of Oulu, Oulu, Finland
Koji Yatani
University of Tokyo, Tokyo, Japan
Simo Hosio
University of Oulu, Oulu, Finland