Leveraging Biometric-Rich Hand Gestures for Head-Mounted Display Authentication

Abstract

User authentication on head-mounted displays (HMDs) typically relies on passwords, which are cumbersome to input and susceptible to shoulder-surfing attacks. Recent research has shown that behavioral signals collected during common HMD tasks are highly distinctive across users. Building on these findings, this paper presents a knowledge-driven behavioral authentication system for HMDs. Our system uses user-defined gestures as cues and trains a per-user anomaly detector on hand joint motion signals. To evaluate its effectiveness, we conducted a comprehensive multi-session user study (n=20) and an observation-attack study (n=10). The results show that gestures secured with joint motions are resilient to worst-case video-based observation attacks (AUC = 0.97, EER = 3.58%) and maintain high recall performance over one week (AUC = 0.93, EER = 9.82%). These findings suggest that user-generated biometric hand gestures offer a promising approach to securing HMDs.
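The abstract's core idea, training a per-user anomaly detector on hand joint motion features and accepting or rejecting new gesture attempts, can be sketched roughly as follows. This is a hypothetical illustration, not the authors' method: the 21-joint 3D feature layout, the distance-to-centroid detector, and the threshold rule are all illustrative assumptions standing in for whatever model the paper actually trains.

```python
# Hypothetical sketch of per-user behavioral authentication on hand joint
# motion features. A simple distance-to-enrollment-centroid detector stands
# in for the paper's anomaly detector; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)

# Simulated enrollment: 30 gesture repetitions, each flattened to
# 63 features (e.g., 21 hand joints x 3D positions).
enroll = rng.normal(size=(30, 63))

centroid = enroll.mean(axis=0)
enroll_dists = np.linalg.norm(enroll - centroid, axis=1)
# Accept a new attempt if its distance to the centroid stays within the
# enrollment distances plus a margin (illustrative threshold rule).
threshold = enroll_dists.mean() + 2 * enroll_dists.std()

def authenticate(attempt: np.ndarray) -> bool:
    """Return True if the gesture attempt looks like the enrolled user."""
    return bool(np.linalg.norm(attempt - centroid) <= threshold)

genuine = enroll[0] + rng.normal(scale=0.05, size=63)  # close to enrolled data
impostor = rng.normal(loc=3.0, size=63)                # different motion pattern

print(authenticate(genuine), authenticate(impostor))
```

In practice a one-class model (e.g., a one-class SVM or autoencoder) trained per user would replace the centroid rule, but the enrollment/verification flow stays the same.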

Award
Honorable Mention
Authors
Amin Jalilov
KAIST, Daejeon, Republic of Korea
Eunyong Cheon
KAIST, Daejeon, Republic of Korea
Ian Oakley
KAIST, Daejeon, Republic of Korea

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Safety, Identity & Relatedness

P1 - Room 112
7 presentations
2026-04-14, 20:15–21:45