2. Future of Typing

Conference Name
UIST 2024
OptiBasePen: Mobile Base+Pen Input on Passive Surfaces by Sensing Relative Base Motion Plus Close-Range Pen Position
Abstract

Digital pen input devices based on absolute pen position sensing, such as Wacom Pens, support high-fidelity pen input. However, they require specialized sensing surfaces like drawing tablets, which can have a large desk footprint, constrain the possible input area, and limit mobility. In contrast, digital pens with integrated relative sensing enable mobile use on passive surfaces, but suffer from motion artifacts or require surface contact at all times, deviating from natural pen affordances. We present OptiBasePen, a device for mobile pen input on ordinary surfaces. Our prototype consists of two parts: the "base" on which the hand rests and the pen for fine-grained input. The base features a high-precision mouse sensor to sense its own relative motion, and two infrared image sensors to track the absolute pen tip position within the base's frame of reference. This enables pen input on ordinary surfaces without external cameras while also avoiding drift from pen micro-movements. In this work, we present our prototype as well as the general base+pen concept, which combines relative and absolute sensing.
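The core idea above, combining the base's relative motion with the pen tip's absolute position in the base's frame, can be sketched as simple 2D frame composition. This is a minimal illustration, not the paper's implementation: the function names are ours, and we assume the base also obtains a heading estimate (how the prototype senses rotation may differ).

```python
import math

def update_base_pose(pose, dx, dy, dtheta):
    """Integrate relative motion from the base's mouse sensor.

    pose is (x, y, theta): the base's estimated position and heading on
    the passive surface. (dx, dy) is the sensed displacement in the
    base's own frame; dtheta is an assumed rotation estimate.
    """
    x, y, theta = pose
    # Rotate the local displacement into surface coordinates.
    gx = dx * math.cos(theta) - dy * math.sin(theta)
    gy = dx * math.sin(theta) + dy * math.cos(theta)
    return (x + gx, y + gy, theta + dtheta)

def pen_tip_on_surface(pose, pen_local):
    """Map the absolutely sensed pen tip position (in the base's frame,
    e.g. from its infrared image sensors) into surface coordinates."""
    x, y, theta = pose
    px, py = pen_local
    return (x + px * math.cos(theta) - py * math.sin(theta),
            y + px * math.sin(theta) + py * math.cos(theta))
```

Because the pen tip is tracked absolutely relative to the base, micro-movements of the pen do not accumulate drift; only the base's coarse motion is integrated.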

Authors
Andreas Rene Fender
University of Stuttgart, Stuttgart, Germany
Mohamed Kari
University of Duisburg-Essen, Essen, Germany
Paper URL

https://doi.org/10.1145/3654777.3676467

Video
Palmrest+: Expanding Laptop Input Space with Shear Force on Palm-Resting Area
Abstract

The palmrest area of laptops has potential as an additional input space, given its consistent palm contact during keyboard interaction. We propose Palmrest+, which leverages shear force exerted on the palmrest area. We suggest two input techniques: Palmrest Shortcut, for instant shortcut execution, and Palmrest Joystick, for continuous value input. These allow seamless and subtle input amidst keyboard typing. An evaluation of Palmrest Shortcut against conventional keyboard shortcuts revealed faster performance for applying shear force in both unimanual and bimanual manners, with a significant reduction in gaze shifting. Additionally, an assessment of Palmrest Joystick against the laptop touchpad demonstrated comparable performance in selecting one- and two-dimensional targets with low-precision pointing, i.e., for short distances and large target sizes. Maximal hand displacement decreased significantly for both Palmrest Shortcut and Palmrest Joystick compared to the conventional methods. These findings verify the feasibility and effectiveness of the palmrest area as an additional input space on laptops, promising an enhanced typing-related user interaction experience.
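A continuous-value technique like Palmrest Joystick is naturally expressed as rate control: shear force maps to cursor velocity, with a deadzone so ordinary typing pressure does not trigger input. The sketch below is purely illustrative; the function name, deadzone, and gain constants are our assumptions, not values from the paper.

```python
import math

def joystick_velocity(fx, fy, deadzone=0.5, gain=40.0):
    """Map shear force (fx, fy) on the palmrest to a cursor velocity.

    Forces whose magnitude stays inside the deadzone are ignored, so
    resting pressure during typing produces no motion. All constants
    are illustrative placeholders.
    """
    mag = math.hypot(fx, fy)
    if mag <= deadzone:
        return (0.0, 0.0)
    # Scale only the force beyond the deadzone; direction is preserved.
    scale = gain * (mag - deadzone) / mag
    return (fx * scale, fy * scale)
```

Rate control suits this setting because the palms barely move: small sustained forces produce continuous cursor motion without hand displacement, matching the paper's finding of reduced maximal hand displacement.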

Authors
Jisu Yim
KAIST, Daejeon, Korea, Republic of
Seoyeon Bae
KAIST, Daejeon, Korea, Republic of
Taejun Kim
School of Computing, KAIST, Daejeon, Korea, Republic of
Sunbum Kim
School of Computing, KAIST, Daejeon, Korea, Republic of
Geehyuk Lee
School of Computing, KAIST, Daejeon, Korea, Republic of
Paper URL

https://doi.org/10.1145/3654777.3676371

Video
TouchInsight: Uncertainty-aware Rapid Touch and Text Input for Mixed Reality from Egocentric Vision
Abstract

While passive surfaces offer numerous benefits for interaction in mixed reality, reliably detecting touch input solely from head-mounted cameras has been a long-standing challenge. Camera specifics, hand self-occlusion, and rapid movements of both head and fingers introduce considerable uncertainty about the exact location of touch events. Existing methods have thus not been capable of achieving the performance needed for robust interaction. In this paper, we present a real-time pipeline that detects touch input from all ten fingers on any physical surface, purely based on egocentric hand tracking. Our method TouchInsight comprises a neural network to predict the moment of a touch event, the finger making contact, and the touch location. TouchInsight represents locations through a bivariate Gaussian distribution to account for uncertainties due to sensing inaccuracies, which we resolve through contextual priors to accurately infer intended user input. We first evaluated our method offline and found that it locates input events with a mean error of 6.3 mm, and accurately detects touch events (F1=0.99) and identifies the finger used (F1=0.96). In an online evaluation, we then demonstrate the effectiveness of our approach for a core application of dexterous touch input: two-handed text entry. In our study, participants typed 37.0 words per minute with an uncorrected error rate of 2.9% on average.
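The fusion of an uncertain touch location with contextual priors can be sketched as a maximum-a-posteriori choice over candidate keys: a Gaussian likelihood centered on each key, weighted by a prior such as a language model. This is our simplified illustration of the idea, not TouchInsight's actual model; the names and the diagonal-covariance assumption are ours.

```python
import math

def decode_key(touch, cov, keys, prior):
    """Pick the most probable intended key given an uncertain touch.

    touch: predicted (x, y) touch location.
    cov:   (vx, vy) diagonal variances of the location estimate
           (a simplification of a full bivariate Gaussian).
    keys:  mapping from key label to key-center (x, y).
    prior: mapping from key label to contextual probability,
           e.g. from a language model (hypothetical values here).
    """
    vx, vy = cov
    tx, ty = touch
    best, best_score = None, -math.inf
    for label, (kx, ky) in keys.items():
        # Gaussian log-likelihood of the touch at this key's center
        # (constant terms omitted; they do not affect the argmax).
        loglik = -((tx - kx) ** 2 / (2 * vx) + (ty - ky) ** 2 / (2 * vy))
        score = loglik + math.log(prior[label])
        if score > best_score:
            best, best_score = label, score
    return best
```

With a uniform prior, the nearest key wins; with a strong contextual prior, an ambiguous touch between two keys can be resolved toward the linguistically likely one, which is the role the paper's contextual priors play.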

Authors
Paul Streli
ETH Zürich, Zürich, Switzerland
Mark Richardson
Facebook Reality Labs, Redmond, Washington, United States
Fadi Botros
Reality Labs Research, Redmond, Washington, United States
Shugao Ma
Facebook, Pittsburgh, Pennsylvania, United States
Robert Wang
Meta Reality Labs, Redmond, Washington, United States
Christian Holz
ETH Zürich, Zurich, Switzerland
Paper URL

https://doi.org/10.1145/3654777.3676330

Video
Can Capacitive Touch Images Enhance Mobile Keyboard Decoding?
Abstract

Capacitive touch sensors capture the two-dimensional spatial profile (referred to as a touch heatmap) of a finger's contact with a mobile touchscreen. However, the research and design of touchscreen mobile keyboards -- one of the most speed- and accuracy-demanding touch interfaces -- has focused on the location of the touch centroid derived from the touch heatmap as the input, discarding the rest of the raw spatial signal. In this paper, we investigate whether touch heatmaps can be leveraged to further improve the tap decoding accuracy of mobile touchscreen keyboards. Specifically, we developed and evaluated machine-learning models that interpret user taps by using the centroids and/or the heatmaps as their input, and studied the contribution of the heatmaps to model performance. The results show that adding the heatmap to the input feature set led to a 21.4% relative reduction in character error rate on average, compared to using the centroid alone. Furthermore, we conducted a live user study with the centroid-based and heatmap-based decoders built into Pixel 6 Pro devices and observed a lower error rate, faster typing speed, and higher self-reported satisfaction scores with the heatmap-based decoder than with the centroid-based decoder. These findings underline the promise of utilizing touch heatmaps to improve the typing experience on mobile keyboards.
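To make concrete what centroid-only decoding discards, the standard reduction computes an intensity-weighted mean over the heatmap and throws away everything else, such as the contact's shape, orientation, and pressure distribution. A minimal sketch of that reduction (our own illustration, not the paper's code):

```python
def touch_centroid(heatmap):
    """Reduce a capacitive touch heatmap (a 2D list of sensed
    intensities) to the single (x, y) centroid that conventional
    keyboard decoders use as input.

    Every heatmap with the same intensity-weighted mean collapses to
    the same point, which is exactly the information loss the paper's
    heatmap-based decoder avoids.
    """
    total = sx = sy = 0.0
    for y, row in enumerate(heatmap):
        for x, v in enumerate(row):
            total += v
            sx += v * x
            sy += v * y
    return (sx / total, sy / total)
```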

Authors
Piyawat Lertvittayakumjorn
Google, Mountain View, California, United States
Shanqing Cai
Google, Mountain View, California, United States
Billy Dou
Google, Mountain View, California, United States
Cedric Ho
Google, Mountain View, California, United States
Shumin Zhai
Google, Mountain View, California, United States
Paper URL

https://doi.org/10.1145/3654777.3676420

Video