We investigated how to incorporate implicit touch pressure, the finger pressure applied to a touch surface during typing, to improve text entry performance via statistical decoding. We focused on one-handed touch-typing on an indirect interface as an example scenario. We first collected typing data on a pressure-sensitive touchpad and analyzed users' typing behavior, including touch point distributions, key-to-finger mappings, and pressure images. Our investigation revealed distinct pressure patterns for different keys. Based on these findings, we performed a series of simulations to iteratively optimize the statistical decoding algorithm, arriving at a Markov-Bayesian decoder that incorporates pressure image data into decoding. It improved top-1 accuracy from 53% to 74% over a naive Bayesian decoder. We then implemented PalmBoard, a text entry method that integrates the Markov-Bayesian decoder and effectively supports one-handed touch-typing on indirect interfaces. In a user study, participants achieved an average speed of 32.8 WPM with a 0.6% error rate; expert typists reached 40.2 WPM after 30 minutes of practice. Overall, our investigation showed that incorporating implicit touch pressure is effective in improving text entry decoding.
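As a rough illustration of the decoding approach summarized above, the Python sketch below scores candidate keys by combining a spatial touch likelihood, a pressure-image term, and a Markov character-transition prior. All models, templates, and weights here are hypothetical placeholders for illustration, not PalmBoard's actual decoder.

```python
import numpy as np

KEYS = "qwertyuiopasdfghjklzxcvbnm"

def gaussian_loglik(touch_xy, mean_xy, cov):
    """Log-likelihood of a touch point under a key's 2D Gaussian touch model."""
    diff = touch_xy - mean_xy
    return -0.5 * (diff @ np.linalg.inv(cov) @ diff + np.log(np.linalg.det(cov)))

def decode_key(touch_xy, pressure_img, prev_key,
               touch_models, pressure_templates, transition_logp):
    """Pick the key maximizing: spatial likelihood + pressure term + Markov prior.

    touch_models:       key -> (mean_xy, cov) of its touch point distribution
    pressure_templates: key -> mean pressure image for that key (illustrative)
    transition_logp:    prev_key -> key -> log transition probability
    """
    scores = {}
    for k in KEYS:
        mean_xy, cov = touch_models[k]
        spatial = gaussian_loglik(touch_xy, mean_xy, cov)
        # Pressure term: similarity of the observed pressure image to a
        # per-key template (one simple way to exploit pressure images).
        pressure = -np.sum((pressure_img - pressure_templates[k]) ** 2)
        prior = transition_logp[prev_key][k]
        scores[k] = spatial + 0.1 * pressure + prior  # weight 0.1 is made up
    return max(scores, key=scores.get)
```

In this sketch, dropping the pressure and transition terms leaves a purely spatial decoder, which roughly corresponds to the naive Bayesian baseline the accuracy comparison refers to.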
Auto-correction is a standard feature of mobile text entry. While state-of-the-art auto-correct methods are usually accurate, the errors that do occur are cumbersome to repair, interrupt the flow of text entry, and challenge the user's agency over the process. In this paper, we describe a system that aims to automatically identify and repair auto-correction errors. The system comprises a multi-modal classifier that detects auto-correction errors from brain activity, eye gaze, and context information, as well as a strategy for repairing such errors by replacing the erroneous correction or suggesting alternatives. We integrated both parts into a generic Android component and thus present a research platform for studying self-repairing end-to-end systems. To demonstrate its feasibility, we performed a user study to evaluate the classification performance and usability of our approach.
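As one possible realization of such a multi-modal classifier, the sketch below late-fuses synthetic stand-ins for EEG, gaze, and context features into a single error detector. The feature dimensions and fusion scheme are assumptions for illustration, not the system's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse(eeg, gaze, context):
    """Late fusion by simple feature concatenation."""
    return np.concatenate([eeg, gaze, context], axis=1)

rng = np.random.default_rng(0)
X = fuse(rng.normal(size=(200, 16)),   # e.g. ERP amplitudes per EEG channel
         rng.normal(size=(200, 4)),    # e.g. fixation count, regression rate
         rng.normal(size=(200, 2)))    # e.g. language-model score of the correction
y = rng.integers(0, 2, size=200)       # 1 = the auto-correction was erroneous

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:1]))        # P(no error), P(error) for one event
```

In a real deployment, the classifier's error probability would gate the repair strategy, e.g. replacing the correction outright only above a confidence threshold and suggesting alternatives otherwise.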
Rapid Serial Visual Presentation (RSVP) has gained popularity as a method for presenting text on wearable devices with limited screen space. Nonetheless, it remains unclear how to calibrate RSVP display parameters, such as spatial alignment or presentation rate, to suit the reader's information processing ability at high presentation speeds. Existing methods rely on comprehension and subjective workload scores, which are influenced by the user's knowledge base and subjective perception. Here, we use electroencephalography (EEG) to directly determine how individual information processing varies with changes in RSVP display parameters. Eighteen participants read text excerpts with RSVP in a repeated-measures design that manipulated the Text Alignment and Presentation Speed of the displayed text. We evaluated how well EEG metrics predicted gains in reading speed, subjective workload, and text comprehension. We found significant correlations between EEG metrics and increasing Presentation Speed, and propose how EEG can be used to dynamically select RSVP parameters.
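As a minimal illustration of relating an EEG metric to Presentation Speed, the sketch below computes a rank correlation over fabricated placeholder values; the paper's actual EEG metrics and analysis are not reproduced here.

```python
from scipy.stats import spearmanr

# Fabricated example values: an EEG-derived workload index (e.g. frontal
# theta power) measured at increasing RSVP presentation speeds.
speeds_wpm = [200, 300, 400, 500, 600]
eeg_index = [0.31, 0.35, 0.44, 0.52, 0.58]

rho, p = spearmanr(speeds_wpm, eeg_index)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

A consistently strong correlation of this kind is what would justify using an EEG index to adapt Presentation Speed to the individual reader.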
Writing technical documents frequently requires following constraints and consistently using domain-specific terms. We interviewed 12 legal professionals and found that they all use a standard word processor, but must rely on their memory to manage dependencies and maintain consistent vocabulary within their documents. We introduce Textlets, interactive objects that reify text selections into persistent items. We show how Textlets help manage consistency and constraints within the document, including selective search and replace, word count, and alternative wording. Eight participants tested a search-and-replace Textlet as a technology probe. All successfully interacted directly with the Textlet to perform advanced tasks, and most (6/8) spontaneously generated a novel replace-all-then-correct strategy. Participants suggested additional ideas, such as supporting collaborative editing over time by embedding a Textlet into the document to flag forbidden words. We argue that Textlets serve as a generative concept for creating powerful new tools for document editing.
https://doi.org/10.1145/3313831.3376804
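To make the Textlet concept concrete, here is a minimal Python sketch of a text selection reified as a persistent object that supports selective search and replace. The class name and API are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Textlet:
    """A reified text selection that persists as a manipulable object."""
    pattern: str  # the selected text, e.g. a domain-specific term

    def occurrences(self, document: str) -> list[int]:
        """Current positions of the pattern, recomputed as the document changes."""
        positions, start = [], 0
        while (i := document.find(self.pattern, start)) != -1:
            positions.append(i)
            start = i + 1
        return positions

    def replace_all(self, document: str, replacement: str) -> str:
        """Search-and-replace driven by the textlet, e.g. to enforce a term."""
        return document.replace(self.pattern, replacement)

doc = "The lessee shall notify the lessor. The lessee pays rent."
term = Textlet("lessee")
print(term.occurrences(doc))           # positions of both occurrences
print(term.replace_all(doc, "tenant"))
```

Because the object persists independently of any one selection gesture, the same structure could back word counts or flag forbidden words, as the participants suggested.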
Cross-device interaction with tablets is a popular topic in HCI research. Recent work has shown the benefits of including multiple devices in users' workflows, and various interaction techniques allow transferring content across devices. However, users remain reluctant to use multiple devices in combination. At the same time, research on cross-device interaction struggles to find a frame of reference for comparing techniques or systems. In this paper, we address these challenges by studying the interplay of interaction techniques, device utilization, and task-specific activities in a user study with 24 participants, evaluating an abstract task, a sensemaking task, and three interaction techniques from different but complementary angles. We found that the choice of interaction technique has less influence than expected, that work behaviors and device utilization depend on the task at hand, and that participants value specific aspects of cross-device interaction.
https://doi.org/10.1145/3313831.3376540