Intelligence analysts must quickly and accurately examine and report on information in multiple modalities, including video, audio, and images. With the rise of Generative AI and deepfakes, analysts face unprecedented challenges and require effective, reliable, and explainable media detection and analysis tools. This work explores analysts' requirements for deepfake detection tools and explainability features. Through a study of 30 practitioners from the United States Intelligence Community, we identified the need for a comprehensive, explainable solution that incorporates a wide variety of methods and supports the production of intelligence reports. In response, we propose a design for an analyst-centered tool and introduce a digital media forensics ontology to support analysts' interactions with the tool and their understanding of its results. As an initial evaluation of this approach, we conducted a study grounded in work-related tasks, and we report on the tool's potential to assist analysts as well as areas for improvement in future work.
https://dl.acm.org/doi/10.1145/3706598.3713711
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)