Smart glasses hold immense potential, but existing input methods often hinder their seamless integration into everyday life. Touchpads built into smart glasses suffer from limited input space and precision; voice commands raise privacy concerns and are contextually constrained; and vision-based or IMU-based gesture recognition faces challenges with computational cost or privacy. We present FingerGlass, an interaction technique for smart glasses that leverages side-mounted fingerprint sensors to capture fingerprint images. Using a combined CNN and LSTM network, FingerGlass identifies which finger is touching the sensor and recognizes four types of gestures (nine gestures in total): sliding, rolling, rotating, and tapping. These gestures, coupled with finger identification, are mapped to common smart glasses commands, enabling comprehensive and fluid text entry and application control. A user study shows that FingerGlass is a promising step towards a discreet, ergonomic, and efficient way of interacting with smart glasses, potentially contributing to their wider adoption and integration into daily life.
https://dl.acm.org/doi/10.1145/3706598.3713929
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)
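The abstract describes a combined CNN and LSTM that consumes fingerprint-image sequences from the side-mounted sensor and outputs both the finger identity and one of nine gestures. Below is a minimal, hypothetical PyTorch sketch of such a two-headed CNN+LSTM classifier; the frame resolution (64x64 grayscale), the number of distinguishable fingers (3), the sequence length, and all layer sizes are illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical CNN+LSTM sketch in the spirit of FingerGlass (not the authors' model).
# Assumptions: 64x64 grayscale fingerprint frames, 3 finger classes, 9 gesture classes.
import torch
import torch.nn as nn

class FingerGestureNet(nn.Module):
    def __init__(self, num_fingers=3, num_gestures=9, hidden=128):
        super().__init__()
        # Per-frame CNN encoder: extracts spatial features from each fingerprint image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                                                  # -> (B*T, 64)
        )
        # LSTM aggregates per-frame features over time to capture motion
        # patterns such as sliding, rolling, and rotating.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        # Two heads: which finger is touching, and which gesture was performed.
        self.finger_head = nn.Linear(hidden, num_fingers)
        self.gesture_head = nn.Linear(hidden, num_gestures)

    def forward(self, frames):
        # frames: (batch, time, 1, 64, 64) sequence of fingerprint images.
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)   # final hidden state summarizes the sequence
        summary = h_n[-1]                # (batch, hidden)
        return self.finger_head(summary), self.gesture_head(summary)

if __name__ == "__main__":
    model = FingerGestureNet()
    dummy = torch.randn(2, 10, 1, 64, 64)  # 2 sequences of 10 fingerprint frames each
    finger_logits, gesture_logits = model(dummy)
    print(finger_logits.shape, gesture_logits.shape)  # (2, 3) and (2, 9)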