Deepfakes, which use AI to manipulate individuals' facial features and voices, have introduced new challenges to online scams, with older adults being particularly vulnerable. However, existing safeguarding efforts often portray older adults as passive recipients of protection, overlooking their own perspectives on deepfake-enabled scams and their expectations for protective interventions. To address this gap, we conducted a participatory design workshop with 10 older adults, in which participants analyzed simulated deepfake scam videos and critiqued provocative safeguarding designs. Their insights revealed key factors contributing to their vulnerability and how they perceive protective measures. The findings underscore the importance of respecting older adults' autonomy and agency in decision-making, as well as the crucial role of enhanced digital literacy in self-protection. Moreover, while tailored safeguarding measures are essential, a broader societal approach centered on shared responsibility is also needed. These design implications, viewed through the lens of older adults, contribute to more tailored safeguarding against deepfake scams.
https://dl.acm.org/doi/10.1145/3706598.3714423
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)