SkullID: Through-Skull Sound Conduction based Authentication for Smartglasses

Abstract

This paper investigates the use of through-skull sound conduction to authenticate smartglass users. We mount a surface transducer on the right mastoid process to play cue signals and capture skull-transformed audio responses through contact microphones on various skull locations. We use the resultant bio-acoustic information as classification features. In an initial single-session study (N=25), we achieved mean Equal Error Rates (EERs) of 5.68% and 7.95% with microphones on the brow and left mastoid process. Combining the two signals substantially improves performance (to 2.35% EER). A subsequent multi-session study (N=30) demonstrates EERs are maintained over three recalls and, additionally, shows robustness to donning variations and background noise (achieving 2.72% EER). In a follow-up usability study over one week, participants report high levels of usability (as expressed by SUS scores) and that only modest workload is required to authenticate. Finally, a security analysis demonstrates the system's robustness to spoofing and imitation attacks.
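The abstract reports performance as Equal Error Rates (EERs): the operating point at which the false accept rate (impostors wrongly accepted) equals the false reject rate (genuine users wrongly rejected). As a minimal sketch (not the authors' code), EER can be estimated from genuine and impostor classifier scores by sweeping a decision threshold; the scores below are synthetic illustrative values.

```python
# Sketch: estimating Equal Error Rate (EER) from match scores.
# Higher score = more likely a genuine user. Data here is synthetic.

def eer(genuine, impostor, steps=1000):
    """Sweep a decision threshold over the score range and return the
    mean of FAR and FRR at the threshold where they are closest."""
    lo = min(genuine + impostor)
    hi = max(genuine + impostor)
    best_gap, best_rate = 1.0, 1.0
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuine rejected
        if abs(far - frr) < best_gap:
            best_gap, best_rate = abs(far - frr), (far + frr) / 2
    return best_rate

# Perfectly separable synthetic scores yield an EER of 0.0.
genuine = [0.9, 0.85, 0.8, 0.95, 0.7]
impostor = [0.2, 0.3, 0.1, 0.4, 0.25]
print(eer(genuine, impostor))  # → 0.0
```

In practice the paper's reported EERs (e.g. 2.35%) would come from sweeping thresholds over real bio-acoustic feature classifier scores across all participants, not a toy list like this.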

Authors
Hyejin Shin
Samsung Research, Seoul, Korea, Republic of
Jun Ho Huh
Samsung Research, Seoul, Korea, Republic of
Bum Jun Kwon
Samsung Research, Seoul, Korea, Republic of
Iljoo Kim
Samsung Research, Seoul, Korea, Republic of
Eunyong Cheon
UNIST, Ulsan, Korea, Republic of
HongMin Kim
UNIST, Ulsan, Korea, Republic of
Choong-Hoon Lee
Samsung Research, Seoul, Korea, Republic of
Ian Oakley
UNIST, Ulsan, Korea, Republic of
Paper URL

doi.org/10.1145/3613904.3642506

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Security Systems

317
5 presentations
2024-05-15 18:00 – 19:20