BlyncSync: Enabling Multimodal Smartwatch Gestures with Synchronous Touch and Blink

Abstract

Input techniques have been drawing abiding attention along with the continual miniaturization of personal computers. In this paper, we present BlyncSync, a novel multi-modal gesture set that leverages the synchronicity of touch and blink events to augment the input vocabulary of smartwatches with a rapid gesture while, at the same time, offering a solution to the false activation problem of blink-based input. BlyncSync contributes the concept of a mutual delimiter, where two modalities are used to jointly delimit the intention of each other's input. A study shows that BlyncSync is 33% faster than using a baseline input delimiter (a physical smartwatch button), with only 150 ms of overhead compared to traditional touch events. Furthermore, our data indicates that the gesture can be tuned to elicit a true positive rate of 97% and a false positive rate of 1.68%.
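For illustration, the synchronicity idea can be approximated by pairing touch and blink timestamps that fall within a short time window, so that neither modality alone triggers the gesture. The sketch below is an assumption-laden example, not the authors' implementation: the Event class, the find_synchronous_pairs function, and the 100 ms window are hypothetical names and values chosen only to make the mutual-delimiter behavior concrete.

```python
# Minimal sketch (assumed, not the paper's method): pair touch and blink
# timestamps to detect a synchronous "mutual delimiter" event.
# The 100 ms window is an illustrative assumption, not a reported value.

from dataclasses import dataclass

SYNC_WINDOW_MS = 100  # assumed synchronicity threshold, for illustration only


@dataclass
class Event:
    kind: str           # "touch" or "blink"
    timestamp_ms: float


def find_synchronous_pairs(touches, blinks, window_ms=SYNC_WINDOW_MS):
    """Return (touch, blink) pairs whose timestamps differ by <= window_ms.

    A touch with no nearby blink is treated as an ordinary touch, and a
    blink with no nearby touch is ignored; requiring both events together
    is what lets a mutual delimiter suppress false activations from
    either modality on its own.
    """
    pairs = []
    for touch in touches:
        for blink in blinks:
            if abs(touch.timestamp_ms - blink.timestamp_ms) <= window_ms:
                pairs.append((touch, blink))
                break  # pair each touch with at most one blink
    return pairs


if __name__ == "__main__":
    touches = [Event("touch", 1000.0), Event("touch", 2500.0)]
    blinks = [Event("blink", 1040.0), Event("blink", 3000.0)]
    print(find_synchronous_pairs(touches, blinks))
    # Only the first touch pairs with a blink; the second remains a plain touch.
```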

Keywords
BlyncSync
Gaze UI
Smartwatch
Mutual Delimiter
Mobile HCI
Wearables
Authors
Bryan Wang
University of Toronto, Toronto, ON, Canada
Tovi Grossman
University of Toronto, Toronto, ON, Canada
DOI

10.1145/3313831.3376132

Paper URL

https://doi.org/10.1145/3313831.3376132

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Around the clock

Paper session
311 KAUA'I
5 presentations
2020-04-28 01:00:00 – 02:15:00