FingerBar: A Mid-Air Touch Bar Interface for Earphones Using Finger-Generated Acoustics

Abstract

Current touch-based interactions on earphones are limited by hygiene concerns and the small interaction surface. Recent works attempt to bypass these issues with mid-air gesture systems based on active acoustic sensing; however, the transmitted signals may be audible and pose potential hearing risks. To address this, we propose FingerBar, a mid-air gesture recognition system for earphones that relies solely on microphones, without active signal transmission. FingerBar leverages the distinctive friction sounds generated by finger gestures to achieve gesture recognition. We design a gesture filtering pipeline to maintain robustness against everyday noise, and an adversarial training strategy further enhances user-independent performance. From a set of 16 candidate gestures, we identify the 7 most suitable for FingerBar based on user acceptability. Extensive evaluations demonstrate high accuracy and robustness, and a user study confirms the practicality and acceptability of the system. Our findings highlight the promise of passive acoustic sensing as a user-friendly interaction modality for earphones.

Authors
Yankai Zhao
Southern University of Science and Technology, Shenzhen, China
Wentao Xie
The Hong Kong University of Science and Technology, Hong Kong, China
Haorui Li
Southern University of Science and Technology, Shenzhen, China
Jiao Li
Southern University of Science and Technology, Shenzhen, China
Tao Sun
Southern University of Science and Technology, Shenzhen, China
Qian Zhang
The Hong Kong University of Science and Technology, Hong Kong, China
Jin Zhang
Southern University of Science and Technology, Shenzhen, China

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Hand Pose & Gestures

P1 - Room 127
7 presentations
2026-04-13, 20:15–21:45