Style Blink: Exploring Digital Inking of Structured Information via Handcrafted Styling as a First-Class Object
Description

Structured note-taking forms such as sketchnoting, self-tracking journals, and bullet journaling go beyond immediate capture of information scraps. Instead, hand-drawn pride in craftsmanship increases the perceived value of notes for sharing and display. But hand-crafting lists, tables, and calendars is tedious and repetitive. To support these practices digitally, Style Blink (“Style-Blocks+Ink”) explores handcrafted styling as a first-class object. Style-blocks encapsulate digital ink, enabling people to craft, modify, and reuse embellishments and decorations for larger structures, and to apply custom layouts. For example, we provide interaction instruments that style ink for personal expression, inking palettes that afford creative experimentation, fillable pens that can be “loaded” with commands and actions to replace menu selections, and techniques to customize inked structures post-creation by modifying the underlying handcrafted style-blocks and to re-layout the overall structure to match users' preferred template. In effect, any ink stroke, notation, or sketch can be encapsulated as a style-object and re-purposed as a tool. Feedback from 13 users shows the potential of style adaptation and reuse in individual sketching practices.
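
To make the idea of ink-as-a-reusable-object concrete, here is a minimal Python sketch of a style-block. The StyleBlock and Stroke classes and the stamp() helper are illustrative assumptions, not Style Blink's actual architecture: they only show how an embellishment drawn once could be captured and reapplied across a structure.

```python
# A minimal sketch of treating handcrafted ink as a reusable "style-block".
# StyleBlock, Stroke, and stamp() are hypothetical names for illustration.
from dataclasses import dataclass

Point = tuple[float, float]

@dataclass
class Stroke:
    points: list[Point]          # a single digital-ink stroke
    color: str = "black"
    width: float = 1.5

@dataclass
class StyleBlock:
    """Encapsulates a handcrafted embellishment so it can be reused as a tool."""
    name: str
    strokes: list[Stroke]

    def stamp(self, dx: float, dy: float) -> list[Stroke]:
        """Return a translated copy of the ink, e.g. to decorate one table cell."""
        return [Stroke([(x + dx, y + dy) for x, y in s.points], s.color, s.width)
                for s in self.strokes]

# A hand-drawn checkbox captured once can be stamped into every row of a
# handcrafted habit-tracker table, and restyled later by editing the block.
checkbox = StyleBlock("checkbox", [Stroke([(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)])])
rows = [checkbox.stamp(0, i * 14) for i in range(5)]
print(len(rows), "stamped copies")
```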

TapGazer: Text Entry with Finger Tapping and Gaze-directed Word Selection
Description

Efficient text entry is a challenge in VR: users cannot easily locate standard physical keyboards, and keys are often out of reach, e.g., when standing.

We present TapGazer, a text entry system where users type by tapping their fingers in place. Users can tap anywhere as long as the identity of each tapping finger can be detected with sensors. Ambiguity between different possible input words is resolved by selecting target words with gaze. If gaze tracking is unavailable, ambiguity is resolved by selecting target words with additional taps.
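
As a rough illustration of how this ambiguity arises and is resolved, the Python sketch below maps each finger to the letters it types in standard touch typing and looks up every dictionary word that shares a tap sequence. The finger grouping, toy lexicon, and printed candidate list are assumptions standing in for TapGazer's real sensing, lexicon, and gaze-based selection.

```python
# A minimal sketch of finger-tap word disambiguation, assuming standard
# touch-typing finger assignments and a toy lexicon (illustrative only).

FINGER_LETTERS = {
    "L-pinky": "qaz", "L-ring": "wsx", "L-middle": "edc", "L-index": "rfvtgb",
    "R-index": "yhnujm", "R-middle": "ik", "R-ring": "ol", "R-pinky": "p",
}
LETTER_TO_FINGER = {ch: f for f, letters in FINGER_LETTERS.items() for ch in letters}

LEXICON = ["right", "fight", "tight", "hello", "world"]  # toy dictionary

def finger_sequence(word):
    """The sequence of fingers that would tap this word."""
    return tuple(LETTER_TO_FINGER[ch] for ch in word)

def candidates(taps):
    """All lexicon words consistent with the observed finger taps."""
    return [w for w in LEXICON if finger_sequence(w) == taps]

# "right", "fight", and "tight" are typed with the same finger sequence,
# so all three are offered as candidates; the user picks one by glancing
# at it (or, without gaze tracking, with an extra disambiguating tap).
print(candidates(finger_sequence("right")))  # ['right', 'fight', 'tight']
```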

We evaluated TapGazer for seated and standing VR: seated novice users using touchpads as tap surfaces reached 44.81 words per minute (WPM), 79.17% of their QWERTY typing speed. Standing novice users tapped on their thighs with touch-sensitive gloves, reaching 45.26 WPM (71.91% of their QWERTY typing speed).

We analyze TapGazer with a theoretical performance model and discuss its potential for text input in future AR scenarios.

Passages: Interacting with Text Across Documents
Description

A key aspect of knowledge work is the analysis and manipulation of sets of related documents. We conducted interviews with 12 patent examiners and 12 scientists and found that all use specialized tools for managing text from multiple documents across various interconnected activities, including searching, collecting, annotating, organizing, writing, and reviewing, while manually tracking their provenance. We introduce Passages, interactive objects that reify text selections and can then be manipulated, reused, and shared across multiple tools. Passages directly supports the above-listed activities as well as fluid transitions among them, e.g. through drag-and-drop across windows. Two user studies show that participants found Passages both elegant and powerful, facilitating their work practices and enabling greater reuse and novel strategies for analyzing and composing documents. We argue that Passages offers a general approach applicable to a wide variety of text-based interactions.
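
As a rough sketch of what reifying a text selection could look like in code, the Python snippet below models a hypothetical Passage object carrying the selected text plus its provenance. The class, its fields, and the cite() helper are assumptions for illustration, not the system's actual data model.

```python
# A minimal sketch of a reified text selection with provenance.
# Passage and its fields are hypothetical names for illustration.
from dataclasses import dataclass, field

@dataclass
class Passage:
    text: str                 # the selected text itself
    source_doc: str           # provenance: which document it came from
    location: tuple           # provenance: e.g. (page, start_offset, end_offset)
    tags: list = field(default_factory=list)    # user annotations / categories
    notes: list = field(default_factory=list)   # free-form comments

    def cite(self) -> str:
        """Render the passage with its provenance, e.g. when reused in a draft."""
        page, _start, _end = self.location
        return f'"{self.text}" ({self.source_doc}, p.{page})'

# A passage collected while reading can later be dropped into an outline,
# a review form, or a draft, carrying its provenance along automatically.
p = Passage("The algorithm halts in O(n log n) steps.", "smith2020.pdf", (4, 120, 160))
p.tags.append("complexity")
print(p.cite())
```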

TypeAnywhere: A QWERTY-Based Text Entry Solution for Ubiquitous Computing
Description

We present TypeAnywhere, a QWERTY-based text entry system for off-desktop computing environments. Using a wearable device that detects finger taps, users can leverage their touch-typing skills from physical keyboards to perform text entry on any surface. TypeAnywhere decodes text from finger-tap sequences alone, without relying on tap locations. To achieve optimal decoding performance, we trained a neural language model, achieving a 1.6% character error rate (CER) in an offline evaluation, compared to a 5.3% CER from a traditional n-gram language model. Our user study showed that participants achieved an average of 70.6 WPM, or 80.4% of their physical-keyboard speed, and a 1.50% CER after 2.5 hours of practice over five days on a table surface. They also achieved 43.9 WPM and a 1.37% CER when typing on their laps. Our results demonstrate the strong potential of QWERTY typing as a ubiquitous text entry solution.
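
The sketch below illustrates, in simplified form, why a language model is central when tap locations are unavailable: many words collapse onto the same finger sequence, and the model must pick the most probable one. The finger grouping, toy lexicon, and unigram log-probabilities are assumptions for illustration; the actual system uses a trained neural language model and richer context.

```python
# A minimal sketch of location-free tap decoding with a toy language model.
# The groupings and probabilities below are illustrative, not the real system.

FINGER_LETTERS = {
    0: "qaz", 1: "wsx", 2: "edc", 3: "rfvtgb",   # left pinky .. left index
    4: "yhnujm", 5: "ik", 6: "ol", 7: "p",       # right index .. right pinky
}
LETTER_TO_FINGER = {ch: f for f, letters in FINGER_LETTERS.items() for ch in letters}

# Toy unigram log-probabilities standing in for a trained language model.
WORD_LOGPROB = {"right": -2.0, "fight": -4.5, "tight": -5.0, "now": -2.5}

def finger_sequence(word):
    return tuple(LETTER_TO_FINGER[ch] for ch in word)

def decode(taps):
    """Most probable word consistent with the finger-tap sequence alone."""
    matches = [w for w in WORD_LOGPROB if finger_sequence(w) == taps]
    return max(matches, key=WORD_LOGPROB.get, default=None)

# "right", "fight", and "tight" produce the same tap sequence; the language
# model resolves the ambiguity in favour of the most likely word. A stronger
# (e.g. neural) model would also use sentence context to lower the error rate.
print(decode(finger_sequence("fight")))  # -> 'right'
```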
