Understanding and Empowering Intelligence Analysts: User-Centered Design for Deepfake Detection Tools

Abstract

Intelligence analysts must quickly and accurately examine and report on information in multiple modalities, including video, audio, and images. With the rise of Generative AI and deepfakes, analysts face unprecedented challenges, and require effective, reliable, and explainable media detection and analysis tools. This work explores analysts' requirements for deepfake detection tools and explainability features. From a study of 30 practitioners from the United States Intelligence Community, we identified the need for a comprehensive and explainable solution that incorporates a wide variety of methods and supports the production of intelligence reports. In response, we propose a design for an analyst-centered tool, and introduce a digital media forensics ontology to support analysts’ interactions with the tool and understanding of its results. We conducted a study grounded in work-related tasks as an initial evaluation of this approach, and report on its potential to assist analysts and areas for improvement in future work.

Authors
Y. Kelly Wu
Rochester Institute of Technology, Rochester, New York, United States
Saniat Sohrawardi
Rochester Institute of Technology, Rochester, New York, United States
Candice Rockell Gerstner
National Security Agency, Fort George G. Meade, Maryland, United States
Matthew Wright
Rochester Institute of Technology, Rochester, New York, United States
DOI

10.1145/3706598.3713711

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713711

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Privacy and Security

Rooms: G418+G419
7 presentations
2025-04-28 20:10:00 – 2025-04-28 21:40:00