Hear Us, then Protect Us: Navigating Deepfake Scams and Safeguard Interventions with Older Adults through Participatory Design

Abstract

Deepfakes, which use AI to manipulate individuals' facial features and voices, have introduced new challenges to online scams, with older adults being particularly vulnerable. However, existing safeguarding efforts often portray them as passive recipients, overlooking their perspectives on understanding deepfake-enabled scams and their expectations for protective interventions. To address this gap, we conducted a participatory design workshop with 10 older adults, in which participants analyzed simulated deepfake scam videos and critiqued provocative safeguarding designs. Their insights revealed key factors contributing to their vulnerability and how they perceive protective measures. The findings underscore the importance of respecting older adults' autonomy and their role in decision-making, as well as the crucial role of enhanced digital literacy in self-protection. Moreover, while tailored safeguarding measures are essential, a broader societal approach centered on shared responsibility is also needed. These design implications, viewed through the lens of older adults, contribute to more tailored safeguarding against deepfake scams.

Authors
Yuxiang Zhai
Tsinghua University, Beijing, China
Xiao Xue
Tsinghua University, Beijing, China
Zekai Guo
Tsinghua University, Beijing, China
Tongtong Jin
Tsinghua University, Beijing, China
Yuting Diao
Tsinghua University, Beijing, China
Jihong Jeung
Tsinghua University, Beijing, China
DOI

10.1145/3706598.3714423

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714423

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: High-Stake Situations

G302
7 presentations
2025-04-28 23:10:00 – 2025-04-29 00:40:00