TouchType-GAN: Modeling Touch Typing with Generative Adversarial Network

Abstract

Models that can generate touch typing data are important to the development of touch typing keyboards. We propose TouchType-GAN, a Conditional Generative Adversarial Network that simulates the locations and timestamps of touch points in touch typing. TouchType-GAN takes arbitrary text as input to generate realistic touch typing both spatially (i.e., the (x, y) coordinates of touch points) and temporally (i.e., the timestamps of touch points). TouchType-GAN introduces a variational generator that estimates a Gaussian distribution for every target letter to prevent mode collapse. Our experiments on a dataset of 3,000 typed sentences show that TouchType-GAN outperforms existing touch typing models, including the Rotational Dual Gaussian model for simulating the distribution of touch points and the Finger-Fitts Euclidean Model for simulating typing time. Overall, our research demonstrates that the proposed GAN structure can learn the distribution of user-typed touch points, and the resulting TouchType-GAN can also estimate typing movements. TouchType-GAN can serve as a valuable tool for designing and evaluating touch typing input systems.
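
The abstract describes a conditional generator that, given target text, predicts a Gaussian distribution per letter over the spatial and temporal properties of each touch point. Below is a minimal illustrative sketch of such a variational generator, not the authors' implementation; the PyTorch framework, the GRU encoder, and all module names and dimensions are assumptions made purely for illustration. In a full GAN setup, the sampled touch points would additionally be scored by a conditional discriminator.

# Illustrative sketch (not the authors' code): a conditional variational
# generator in the spirit of TouchType-GAN. Conditioned on a sequence of
# target letters, it predicts a diagonal Gaussian (mean and log-variance)
# over the (x, y, t) of each touch point and samples via the
# reparameterization trick, which keeps sampling differentiable and
# discourages the generator from collapsing to a single output mode.
# All names and dimensions here are assumptions for illustration only.

import torch
import torch.nn as nn

NUM_LETTERS = 27          # assumed alphabet: 'a'-'z' plus space
EMBED_DIM = 32
HIDDEN_DIM = 64
OUT_DIM = 3               # (x, y, timestamp) per touch point


class VariationalTouchGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_LETTERS, EMBED_DIM)
        # A bidirectional GRU gives each letter context from the whole phrase.
        self.rnn = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True,
                          bidirectional=True)
        # Heads that parameterize a diagonal Gaussian for every target letter.
        self.mu_head = nn.Linear(2 * HIDDEN_DIM, OUT_DIM)
        self.logvar_head = nn.Linear(2 * HIDDEN_DIM, OUT_DIM)

    def forward(self, letter_ids):
        # letter_ids: (batch, seq_len) integer-encoded target text
        h, _ = self.rnn(self.embed(letter_ids))
        mu = self.mu_head(h)                    # per-letter mean of (x, y, t)
        logvar = self.logvar_head(h)            # per-letter log-variance
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        touch_points = mu + eps * std           # reparameterized sample
        return touch_points, mu, logvar


if __name__ == "__main__":
    gen = VariationalTouchGenerator()
    phrase = torch.randint(0, NUM_LETTERS, (1, 10))   # dummy 10-letter phrase
    points, mu, logvar = gen(phrase)
    print(points.shape)   # torch.Size([1, 10, 3]): one (x, y, t) per letter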

Authors
Jeremy Chu
Stony Brook University, Stony Brook, New York, United States
Yan Ma
Stony Brook University, Stony Brook, New York, United States
Shumin Zhai
Google, Mountain View, California, United States
Xianfeng David Gu
Stony Brook University, Stony Brook, New York, United States
Xiaojun Bi
Stony Brook University, Stony Brook, New York, United States
Paper URL

https://doi.org/10.1145/3586183.3606760

Conference: UIST 2023

ACM Symposium on User Interface Software and Technology

Session: Digital Dexterity: Touching and Typing Techniques

Gold Room
6 presentations
2023-10-31 01:10:00 – 02:30:00