TapNet: The Design, Training, Implementation, and Applications of a Multi-Task Learning CNN for Off-Screen Mobile Input

Abstract

To make off-screen interaction without specialized hardware practical, we investigate using deep learning methods to process the common built-in IMU sensors (accelerometer and gyroscope) on mobile phones into a useful set of one-handed interaction events. We present the design, training, implementation, and applications of TapNet, a multi-task network that detects tapping on the smartphone. With phone form factor as auxiliary information, TapNet can jointly learn from data across devices and simultaneously recognize multiple tap properties, including tap direction and tap location. We developed two datasets consisting of over 135K training samples, 38K testing samples, and 32 participants in total. Experimental evaluation demonstrated the effectiveness of the TapNet design and its significant improvement over the state of the art. Along with the datasets, codebase, and extensive experiments, TapNet establishes a new technical foundation for off-screen mobile input.
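
As a concrete illustration of the architecture the abstract describes, the sketch below shows a minimal multi-task 1D CNN in PyTorch: a shared convolutional backbone over a six-channel IMU window (3-axis accelerometer plus 3-axis gyroscope), a learned embedding of the phone form factor concatenated in as auxiliary information, and separate classification heads for tap direction and tap location. This is a hedged sketch, not the authors' released TapNet code; the class name TapNetSketch, the window length, and all layer sizes and class counts are illustrative assumptions.

# Minimal multi-task CNN sketch in the spirit of TapNet (all sizes are assumptions).
import torch
import torch.nn as nn

class TapNetSketch(nn.Module):
    def __init__(self, window_len=128, num_devices=4,
                 num_directions=5, num_locations=4):
        super().__init__()
        # Shared backbone: two 1D conv blocks over the 6-channel IMU time series.
        self.backbone = nn.Sequential(
            nn.Conv1d(6, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # -> (batch, 64, 1)
        )
        # Phone form factor as auxiliary information: a learned per-device embedding.
        self.device_emb = nn.Embedding(num_devices, 8)
        # One output head per tap property (multi-task learning).
        self.direction_head = nn.Linear(64 + 8, num_directions)
        self.location_head = nn.Linear(64 + 8, num_locations)

    def forward(self, imu, device_id):
        # imu: (batch, 6, window_len); device_id: (batch,) long tensor
        feat = self.backbone(imu).squeeze(-1)                    # (batch, 64)
        feat = torch.cat([feat, self.device_emb(device_id)], dim=1)
        return self.direction_head(feat), self.location_head(feat)

if __name__ == "__main__":
    model = TapNetSketch()
    imu = torch.randn(2, 6, 128)   # two synthetic IMU windows
    dev = torch.tensor([0, 3])     # two synthetic device ids
    dir_logits, loc_logits = model(imu, dev)
    print(dir_logits.shape, loc_logits.shape)  # torch.Size([2, 5]) torch.Size([2, 4])

Joint training of such a model would minimize the sum of per-head cross-entropy losses, which is what lets a single shared backbone learn tap features across both properties and, via the device embedding, across phone form factors.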

Authors
Michael Xuelin Huang
Google, Mountain View, California, United States
Yang Li
Google Research, Mountain View, California, United States
Nazneen Nazneen
Google, Mountain View, California, United States
Alexander Chao
Google, Mountain View, California, United States
Shumin Zhai
Google, Mountain View, California, United States
DOI

10.1145/3411764.3445626

Paper URL

https://doi.org/10.1145/3411764.3445626

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Computational Physical Interaction

[A] Paper Room 02, 2021-05-10 17:00:00~2021-05-10 19:00:00 / [B] Paper Room 02, 2021-05-11 01:00:00~2021-05-11 03:00:00 / [C] Paper Room 02, 2021-05-11 09:00:00~2021-05-11 11:00:00