MAF: Exploring Mobile Acoustic Field for Hand-to-Face Gesture Interactions

Abstract

We present MAF, a novel acoustic sensing approach that leverages the commodity hardware in bone conduction earphones for hand-to-face gesture interactions. By emitting audio signals through bone conduction earphones, we observe that these signals not only propagate along the surface of the human face but also dissipate into the air, creating an acoustic field that envelops the wearer's head. We conduct benchmark studies to understand how various hand-to-face gestures and human factors influence this acoustic field. Building on the insights from these studies, we propose a deep neural network combined with signal preprocessing techniques that enables MAF to detect, segment, and recognize a variety of hand-to-face gestures, whether performed in contact with the face or above it. Our comprehensive evaluation with 22 participants demonstrates that MAF achieves an average gesture recognition accuracy of 92% across ten different gestures tailored to users' preferences.
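
The abstract only outlines the pipeline at a high level: preprocess the received acoustic signal, then classify it with a neural network. The sketch below illustrates that general shape, not the authors' actual MAF implementation; the function and class names (spectrogram, GestureNet), the sample rate, the STFT parameters, and the network architecture are all illustrative assumptions. Only the count of ten gestures comes from the abstract.

```python
# A minimal, hypothetical sketch of the kind of pipeline the abstract
# describes: turn a microphone recording into a time-frequency image,
# then classify it with a small neural network. All parameters here are
# illustrative assumptions, not the authors' MAF implementation.
import numpy as np
import torch
import torch.nn as nn

SAMPLE_RATE = 48_000    # assumed earphone audio sample rate
N_FFT, HOP = 1024, 256  # assumed STFT window and hop sizes
NUM_GESTURES = 10       # the paper evaluates ten gestures

def spectrogram(signal: np.ndarray) -> np.ndarray:
    """Log-magnitude STFT of a mono recording -> (freq_bins, time_frames)."""
    window = np.hanning(N_FFT)
    frames = [
        np.abs(np.fft.rfft(window * signal[i:i + N_FFT]))
        for i in range(0, len(signal) - N_FFT, HOP)
    ]
    return np.log1p(np.stack(frames, axis=-1)).astype(np.float32)

class GestureNet(nn.Module):
    """Small CNN classifier over spectrogram images (hypothetical)."""
    def __init__(self, num_classes: int = NUM_GESTURES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: one second of (synthetic) audio -> spectrogram -> gesture logits.
audio = np.random.randn(SAMPLE_RATE).astype(np.float32)
spec = torch.from_numpy(spectrogram(audio)).unsqueeze(0).unsqueeze(0)
logits = GestureNet()(spec)
print(logits.shape)  # torch.Size([1, 10])
```

In practice such a pipeline would also need the detection and segmentation stages the abstract mentions (deciding when a gesture starts and ends before classifying it); the sketch covers only the recognition step.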

Authors
Yongjie Yang
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Tao Chen
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Yujing Huang
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Xiuzhen Guo
Zhejiang University, Hangzhou, China
Longfei Shangguan
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3613904.3642437

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Interaction and Perception in Immersive Environments

Room: 317
5 presentations
2024-05-13 23:00:00 – 2024-05-14 00:20:00