This study session has ended. Thank you for participating.
Suggesting multiple target candidates based on touch input is a possible option for high-accuracy target selection on small touchscreen devices. However, it can become overwhelming if suggestions are triggered too often. To address this, we propose SATS, a Suggestion-based Accurate Target Selection method, where target selection is formulated as a sequential decision problem. The objective is to maximize the utility: the negative time cost for the entire target selection procedure. The SATS decision process is dictated by a policy generated using reinforcement learning. It automatically decides when to provide suggestions and when to directly select the target. Our user studies show that SATS reduced error rate and selection time over Shift [Vogel and Baudisch 2007], a magnification-based method, and MUCS, a suggestion-based alternative that optimizes the utility for the current selection. SATS also significantly reduced error rate over BayesianCommand [Zhu et al. 2020], which directly selects targets based on posteriors, with only a minor increase in selection time.
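As a rough illustration of the sequential-decision framing above, the sketch below simulates one selection episode whose utility is the negative total time, with a toy threshold policy standing in for the learned reinforcement-learning policy. The time costs, the ambiguity measure p_correct, and the policy itself are invented for illustration and are not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): target selection as a
# sequential decision problem whose utility is the negative total time.
import random

SELECT_TIME = 0.3      # assumed cost of a direct selection attempt (s)
SUGGEST_TIME = 0.9     # assumed cost of showing and confirming suggestions (s)
ERROR_PENALTY = 2.0    # assumed extra time to recover from a wrong selection (s)

def rollout(policy, p_correct):
    """Simulate one selection episode and return its utility (negative time)."""
    total_time = 0.0
    while True:
        action = policy(p_correct)            # 'select' or 'suggest'
        if action == 'suggest':
            total_time += SUGGEST_TIME        # suggestions always resolve the target here
            return -total_time
        total_time += SELECT_TIME
        if random.random() < p_correct:       # direct selection succeeded
            return -total_time
        total_time += ERROR_PENALTY           # wrong target: pay recovery cost, retry

def threshold_policy(p_correct, tau=0.7):
    """Toy stand-in for the learned policy: suggest when the touch is ambiguous."""
    return 'select' if p_correct >= tau else 'suggest'

if __name__ == '__main__':
    for p in (0.95, 0.6, 0.3):
        utils = [rollout(threshold_policy, p) for _ in range(10_000)]
        print(f'p_correct={p:.2f}  mean utility={sum(utils) / len(utils):.3f}')
```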
Some individuals with motor impairments communicate using a single switch --- such as a button click, air puff, or blink. Row-column scanning provides a method for choosing items arranged in a grid using a single switch. An alternative, Nomon, allows potential selections to be arranged arbitrarily rather than requiring a grid (as desired for gaming, drawing, etc.) --- and provides a probabilistic selection method. While past results suggest that Nomon may be faster and easier to use than row-column scanning, no work has yet quantified the performance of the two methods over longer time periods or in tasks beyond writing. In this paper, we develop and validate a webcam-based switch that allows a user without a motor impairment to approximate the response times of a motor-impaired single-switch user; although the approximation is not a replacement for testing with single-switch users, it allows us to better initialize, calibrate, and evaluate our method. Over 10 sessions with the webcam switch, we found that users typed faster and more easily with Nomon than with row-column scanning. The benefits of Nomon were even more pronounced in a picture-selection task. Evaluation and feedback from a motor-impaired switch user further support the promise of Nomon.
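The sketch below illustrates the row-column scanning mechanism mentioned above: a single switch first picks a row, then a column within that row. The grid, the simulated switch, and the step counting are assumptions for demonstration only and do not reflect the study software.

```python
# Illustrative sketch: row-column scanning picks an item from a grid with a
# single switch by highlighting rows in turn, then columns within the chosen row.
import itertools

GRID = [['a', 'b', 'c'],
        ['d', 'e', 'f'],
        ['g', 'h', 'i']]

def row_column_scan(grid, switch_fires):
    """`switch_fires(phase, index)` stands in for the single switch: it returns
    True when the user activates the switch while `index` is highlighted."""
    # Phase 1: cycle through rows until the switch fires.
    for steps, r in enumerate(itertools.cycle(range(len(grid))), start=1):
        if switch_fires('row', r):
            break
    # Phase 2: cycle through columns of the chosen row.
    for more, c in enumerate(itertools.cycle(range(len(grid[r]))), start=1):
        if switch_fires('col', c):
            return grid[r][c], steps + more   # selected item, number of scan steps

if __name__ == '__main__':
    target_row, target_col = 2, 1             # the simulated user wants 'h'
    user = lambda phase, i: i == (target_row if phase == 'row' else target_col)
    item, scan_steps = row_column_scan(GRID, user)
    print(f'selected {item!r} after {scan_steps} scan steps')
```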
We present a framework for gesture customization requiring minimal examples from users, all without degrading the performance of existing gesture sets. To achieve this, we first deployed a large-scale study (N=500+) to collect data and train an accelerometer-gyroscope recognition model with a cross-user accuracy of 95.7% and a false-positive rate of 0.6 per hour when tested on everyday non-gesture data. Next, we designed a few-shot learning framework that derives a lightweight model from our pre-trained model, enabling knowledge transfer without performance degradation. We validated our approach through a user study (N=20) examining on-device customization of 12 new gestures, which yielded average accuracies of 55.3%, 83.1%, and 87.2% when adding a new gesture with one, three, or five shots, respectively, while maintaining the recognition accuracy and false-positive rate of the pre-existing gesture set. We further evaluated the usability of our real-time implementation with a user experience study (N=20). Our results highlight the effectiveness, learnability, and usability of our customization framework. Our approach paves the way for a future where users are no longer bound to pre-existing gestures, freeing them to creatively introduce new gestures tailored to their preferences and abilities.
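One common way to realize this kind of few-shot customization is to freeze a pre-trained feature extractor and represent each new gesture by the mean embedding of its one to five examples. The sketch below shows that recipe with a random stand-in for the frozen accelerometer-gyroscope backbone; the window size, gesture names, and nearest-prototype classifier are assumptions for illustration, not the paper's actual architecture or training procedure.

```python
# Illustrative sketch, not the paper's implementation: few-shot gesture
# registration with a frozen backbone and nearest-class-mean classification.
import numpy as np

rng = np.random.default_rng(0)
WINDOW_SHAPE = (50, 6)                                  # assumed 50 IMU samples x 6 channels
PROJECTION = rng.standard_normal((50 * 6, 16))          # fixed ("frozen") stand-in weights

def frozen_backbone(imu_window):
    """Stand-in for the frozen pre-trained embedding of an IMU window."""
    return np.asarray(imu_window).ravel() @ PROJECTION

def add_gesture(prototypes, name, shots):
    """Register a new gesture from one to five example recordings (shots)."""
    embeddings = np.stack([frozen_backbone(s) for s in shots])
    prototypes[name] = embeddings.mean(axis=0)

def classify(prototypes, imu_window):
    """Assign a window to the nearest gesture prototype (nearest class mean)."""
    z = frozen_backbone(imu_window)
    return min(prototypes, key=lambda g: np.linalg.norm(z - prototypes[g]))

if __name__ == '__main__':
    prototypes = {}
    # Hypothetical custom gestures, each registered from three random "shots".
    add_gesture(prototypes, 'wrist_flick', [rng.standard_normal(WINDOW_SHAPE) for _ in range(3)])
    add_gesture(prototypes, 'pinch', [rng.standard_normal(WINDOW_SHAPE) for _ in range(3)])
    probe = rng.standard_normal(WINDOW_SHAPE)
    print('classified as:', classify(prototypes, probe))
```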
Despite the advent of touchscreens, typing on physical keyboards remains the most efficient way to enter text, because users can leverage all fingers across a full-size keyboard for convenient typing. As users increasingly type on the go, text input on mobile and wearable devices has had to compromise on full-size typing. In this paper, we present TapType, a mobile text entry system for full-size typing on passive surfaces—without an actual keyboard. From the inertial sensors inside a band on either wrist, TapType decodes surface taps and relates them to a traditional QWERTY keyboard layout. The key novelty of our method is to predict the most likely character sequences by fusing the finger probabilities from our Bayesian neural network classifier with the characters' prior probabilities from an n-gram language model. In our online evaluation, participants typed 19 words per minute on average with a character error rate of 0.6% after 30 minutes of training. Expert typists consistently achieved more than 25 WPM at a similar error rate. We demonstrate applications of TapType in mobile use around smartphones and tablets, as a complement to interaction in situated Mixed Reality outside visual control, and as an eyes-free mobile text input method using an audio feedback-only interface.
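The decoding idea above, fusing classifier probabilities with a language-model prior, can be sketched as a small beam search that adds log-probabilities from both sources. The probability tables below are made up, and the character-level simplification glosses over the real system's finger classification with a Bayesian neural network and its full n-gram model, so treat this only as a sketch of the fusion principle.

```python
# Illustrative sketch, not TapType's decoder: fuse per-tap character
# likelihoods with toy bigram language-model priors by adding log-probabilities,
# keeping the best hypotheses with a small beam search.
import math

# Assumed P(character | tap features) for two observed taps.
tap_likelihoods = [
    {'t': 0.5, 'y': 0.3, 'g': 0.2},
    {'o': 0.6, 'i': 0.3, 'p': 0.1},
]

# Assumed P(character | previous character): a toy bigram language model.
bigram = {
    ('<s>', 't'): 0.2, ('<s>', 'y'): 0.05, ('<s>', 'g'): 0.05,
    ('t', 'o'): 0.3,   ('t', 'i'): 0.1,   ('t', 'p'): 0.01,
    ('y', 'o'): 0.2,   ('y', 'i'): 0.05,  ('y', 'p'): 0.01,
    ('g', 'o'): 0.2,   ('g', 'i'): 0.1,   ('g', 'p'): 0.01,
}

def decode(likelihoods, beam_width=3):
    beams = [(('<s>',), 0.0)]                       # (history, log score)
    for step in likelihoods:
        candidates = []
        for history, score in beams:
            for ch, p_tap in step.items():
                p_lm = bigram.get((history[-1], ch), 1e-4)
                candidates.append((history + (ch,),
                                   score + math.log(p_tap) + math.log(p_lm)))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return [(''.join(h[1:]), s) for h, s in beams]

if __name__ == '__main__':
    for text, score in decode(tap_likelihoods):
        print(f'{text}: log-score {score:.2f}')
```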
Gesture elicitation studies are commonly used for designing novel gesture-based interfaces. There is a rich methodology literature on metrics and analysis methods that helps researchers understand and characterize data arising from such studies. However, deriving concrete gesture vocabularies from this data, which is often the ultimate goal, remains largely based on heuristics and ad hoc methods. In this paper, we treat the problem of deriving a gesture vocabulary from gesture elicitation data as a computational optimization problem. We show how to formalize it as an optimal assignment problem and discuss how to express objective functions and custom design constraints through integer programs. In addition, we introduce a set of tools for assessing the uncertainty of optimization outcomes due to random sampling, and for supporting researchers’ decisions on when to stop collecting data from a gesture elicitation study. We evaluate our methods on a large number of simulated studies.
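One concrete reading of the optimal-assignment formulation is: given counts of how often participants proposed each gesture for each referent, choose the one-to-one gesture-to-referent assignment that maximizes total agreement. The sketch below solves that with the Hungarian algorithm via scipy.optimize.linear_sum_assignment; the referents, gestures, and vote matrix are invented, and the custom design constraints handled by the paper's integer programs are omitted.

```python
# Illustrative sketch, not the paper's solver: derive a gesture vocabulary from
# elicitation data by maximizing total "votes" in a one-to-one assignment of
# gestures to referents (the classic assignment problem).
import numpy as np
from scipy.optimize import linear_sum_assignment

referents = ['zoom in', 'zoom out', 'rotate', 'delete']
gestures = ['pinch out', 'pinch in', 'circle', 'swipe away']

# votes[i, j]: how many participants proposed gesture j for referent i (made up).
votes = np.array([
    [14,  1,  0,  0],
    [ 1, 13,  1,  0],
    [ 0,  0, 10,  2],
    [ 0,  1,  3,  9],
])

# linear_sum_assignment minimizes cost, so negate the votes to maximize agreement.
rows, cols = linear_sum_assignment(-votes)
for r, c in zip(rows, cols):
    print(f'{referents[r]:<9} -> {gestures[c]} ({votes[r, c]} votes)')
print('total agreement:', votes[rows, cols].sum())
```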