Deepfake scams, which use AI-generated audio or video to impersonate individuals, pose a growing cybersecurity threat to older adults. Existing educational approaches present threats through generic examples, leading learners to perceive scams as something that happens to others rather than to themselves. To address this gap, we conducted a formative study with five digital educators to identify design requirements, then developed DeepAware, a self-referential simulation platform that embeds participants' own faces and voices into deepfake scam scenarios. By making learners the targets of simulated threats rather than passive observers, DeepAware aims to collapse the psychological distance between abstract warnings and personal vulnerability. A mixed-methods evaluation with 21 older adults found improvements in deepfake knowledge, threat perception, and coping confidence, though responses varied with prior familiarity. This work demonstrates the potential of self-referential simulation for cybersecurity education and offers design implications for future interventions.
ACM CHI Conference on Human Factors in Computing Systems