FaceSight: Enabling Hand-to-Face Gesture Interaction on AR Glasses with a Downward Camera Vision

Abstract

We present FaceSight, a computer vision-based hand-to-face gesture sensing technique for AR glasses. FaceSight fixes an infrared camera onto the bridge of the AR glasses to provide extra sensing capability for the lower face and hand behaviors. We derived a set of 21 hand-to-face gestures and demonstrated their potential interaction benefits through five AR applications. We designed and implemented an algorithm pipeline that segments facial regions, detects hand-face contact (F1 score: 98.36%), and trains convolutional neural network (CNN) models to classify the hand-to-face gestures. The pipeline supports discrete gesture recognition as well as continuous input, including nose deformation estimation and fingertip movement tracking. Our algorithm classifies all gestures with an accuracy of 83.06%, evaluated on data collected from 10 users. Given its compact form factor and rich gesture set, we regard FaceSight as a practical solution for augmenting the input capability of AR glasses.
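The paper's pipeline (facial-region segmentation, hand-face contact detection, CNN-based gesture classification) is not reproduced on this page, so the following is only a minimal, hypothetical sketch of how such a pipeline could be structured. The CNN architecture, the brightness-based contact heuristic, and all thresholds below are illustrative assumptions, not the authors' implementation; only the 21-gesture count comes from the paper.

```python
# Hypothetical sketch of a FaceSight-style pipeline (not the authors' code).
# Assumes each IR frame arrives as a 64x64 grayscale numpy array; the model
# architecture, thresholds, and class ids are placeholders for illustration.
import numpy as np
import torch
import torch.nn as nn

NUM_GESTURES = 21  # the paper reports 21 hand-to-face gestures


class GestureCNN(nn.Module):
    """Small CNN over a cropped facial-region IR patch (architecture assumed)."""

    def __init__(self, num_classes=NUM_GESTURES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # After two 2x pools, a 64x64 input becomes 32 channels of 16x16.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):  # x: (N, 1, 64, 64), values normalized to [0, 1]
        x = self.features(x)
        return self.classifier(x.flatten(1))


def detect_contact(ir_frame, threshold=200, min_pixels=500):
    """Toy hand-face contact check: in near-IR, skin close to the camera
    appears bright, so count saturated pixels (threshold values assumed)."""
    return int((ir_frame > threshold).sum()) > min_pixels


def classify(model, face_patch):
    """Run the CNN on a 64x64 facial-region crop and return a gesture id."""
    x = torch.from_numpy(face_patch).float().div(255.0)[None, None]
    with torch.no_grad():
        return int(model(x).argmax(dim=1))


if __name__ == "__main__":
    model = GestureCNN().eval()
    frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in IR frame
    if detect_contact(frame):
        print("gesture id:", classify(model, frame))
```

Gating the classifier behind a cheap contact detector mirrors the paper's two-stage structure: the expensive CNN only runs once hand-face contact is plausible, which matters on power-constrained AR glasses.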

Authors
Yueting Weng
Tsinghua University, Beijing, China
Chun Yu
Tsinghua University, Beijing, China
Yingtian Shi
Tsinghua University, Beijing, China
Yuhang Zhao
University of Wisconsin-Madison, Madison, Wisconsin, United States
Yukang Yan
Tsinghua University, Beijing, China
Yuanchun Shi
Tsinghua University, Beijing, China
DOI

10.1145/3411764.3445484

Paper URL

https://doi.org/10.1145/3411764.3445484

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Vision and Sensing

[A] Paper Room 10, 2021-05-12 17:00:00~2021-05-12 19:00:00 / [B] Paper Room 10, 2021-05-13 01:00:00~2021-05-13 03:00:00 / [C] Paper Room 10, 2021-05-13 09:00:00~2021-05-13 11:00:00
Paper Room 10: 13 presentations