Improving Finger Stroke Recognition Rate for Eyes-Free Mid-Air Typing in VR

Abstract

We examine mid-air typing data collected from touch typists to evaluate features and classification models for recognizing finger strokes. A large number of finger movement traces were collected using finger motion capture systems, labeled into individual finger strokes, and characterized by several key features. We test finger kinematic features, including 3D position, velocity, and acceleration, as well as temporal features, including previous fingers and keys. Based on this analysis, we assess the performance of various classifiers, including Naive Bayes, Random Forest, Support Vector Machines, and Deep Neural Networks, in terms of their accuracy in correctly classifying keystrokes. Finally, we incorporate a linguistic heuristic to explore the effectiveness of a character prediction model and improve the overall accuracy.
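The pipeline described above — per-stroke kinematic features fed to a probabilistic classifier, then rescored with a linguistic prior — can be sketched as follows. This is a hypothetical illustration, not the authors' code: a minimal Gaussian Naive Bayes over made-up two-dimensional stroke features (e.g., downward velocity and lateral displacement at key-down), with an optional character-frequency prior standing in for the linguistic heuristic.

```python
import math
import random

# Hypothetical sketch (not the authors' implementation): Gaussian Naive
# Bayes over per-stroke kinematic feature vectors, with an optional
# linguistic prior applied at prediction time.

def fit_gnb(X, y):
    """Estimate per-class feature means, variances, and class priors."""
    model = {}
    for label in set(y):
        rows = [x for x, lbl in zip(X, y) if lbl == label]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [
            max(sum((v - m) ** 2 for v in col) / n, 1e-6)  # floor variance
            for col, m in zip(zip(*rows), means)
        ]
        model[label] = (means, variances, n / len(y))
    return model

def log_scores(model, x):
    """log P(class) + sum_i log N(x_i | mean_i, var_i) per class."""
    scores = {}
    for label, (means, variances, prior) in model.items():
        s = math.log(prior)
        for v, m, var in zip(x, means, variances):
            s += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        scores[label] = s
    return scores

def predict(model, x, char_prior=None):
    """Classify a stroke; optionally rescore with a character prior
    (a stand-in for the linguistic heuristic in the paper)."""
    scores = log_scores(model, x)
    if char_prior:
        scores = {k: s + math.log(char_prior.get(k, 1e-9))
                  for k, s in scores.items()}
    return max(scores, key=scores.get)

# Synthetic demo: two 'keys' with well-separated kinematic signatures.
random.seed(0)
X = ([[random.gauss(1.0, 0.1), random.gauss(0.2, 0.05)] for _ in range(50)]
     + [[random.gauss(0.4, 0.1), random.gauss(0.8, 0.05)] for _ in range(50)])
y = ["a"] * 50 + ["s"] * 50
model = fit_gnb(X, y)
print(predict(model, [1.0, 0.2]))  # → a
```

In the paper's actual evaluation, this role is played by stronger classifiers (Random Forest, SVM, DNN) over richer feature sets; the sketch only shows where kinematic likelihoods and the linguistic prior combine.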

Authors
Yatharth Singhal
University of Texas at Dallas, Dallas, Texas, United States
Richard Huynh Noeske
The University of Texas at Dallas, Richardson, Texas, United States
Ayush Bhardwaj
The University of Texas at Dallas, Richardson, Texas, United States
Jin Ryong Kim
The University of Texas at Dallas, Richardson, Texas, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502100

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Input Techniques

5 presentations
2022-05-03 20:00 – 21:15