Text entry

Paper session

Conference
CHI 2020
EYEditor: Towards On-the-Go Heads-Up Text Editing Using Voice and Manual Input
Abstract

On-the-go text editing is difficult, yet frequently needed in everyday life. Using a smartphone to edit text forces users into a heads-down posture, which can be undesirable and unsafe. We present EYEditor, a heads-up smartglass-based solution that displays text on a see-through peripheral display and supports text editing with voice and manual input. The choices of output modality (visual and/or audio) and content presentation were made after a controlled experiment, which showed that sentence-by-sentence, visual-only presentation is best for supporting users' editing and path-navigation capabilities. A second experiment formally evaluated EYEditor against the standard smartphone-based solution on tasks with varied editing complexities and navigation difficulties. The results showed that EYEditor outperformed smartphones as either the path or the task became more difficult; however, EYEditor's advantage became less salient when both the editing and the navigation were difficult. We discuss trade-offs and insights gained for future heads-up text-editing solutions.

Keywords
Heads-up Interaction
Smart glass
Text editing
Voice Interaction
EYEditor
Wearable Interaction
Mobile Interaction
Re-speaking
Manual-input
Authors
Debjyoti Ghosh
National University of Singapore, Singapore, Singapore
Pin Sym Foong
National University of Singapore, Singapore, Singapore
Shengdong Zhao
National University of Singapore, Singapore, Singapore
Can Liu
City University of Hong Kong, Kowloon, China
Nuwan Janaka
National University of Singapore, Singapore, Singapore
Vinitha Erusu
National University of Singapore, Singapore, Singapore
DOI

10.1145/3313831.3376173

Paper URL

https://doi.org/10.1145/3313831.3376173

Video
TAGSwipe: Touch Assisted Gaze Swipe for Text Entry
Abstract

Conventional dwell-based methods for text entry by gaze are typically slow and uncomfortable. A swipe-based method that maps a gaze path to a word offers an alternative; however, it requires the user to explicitly indicate the beginning and end of each word, which is typically achieved by tedious gaze-only selection. This paper introduces TAGSwipe, a bi-modal method that combines the simplicity of touch with the speed of gaze for swiping through a word. The result is an efficient and comfortable dwell-free text entry method. In a lab study, TAGSwipe achieved an average text entry rate of 15.46 wpm and significantly outperformed conventional swipe-based and dwell-based methods in efficacy and user satisfaction.

Keywords
Eye typing
multimodal interaction
touch input
dwell-free typing
word-level text entry
swipe
eye tracking
Authors
Chandan Kumar
University of Koblenz–Landau, Koblenz, Germany
Ramin Hedeshy
University of Koblenz–Landau, Koblenz, Germany
I. Scott MacKenzie
York University, Toronto, ON, Canada
Steffen Staab
University of Stuttgart & University of Southampton, Stuttgart, Germany
DOI

10.1145/3313831.3376317

Paper URL

https://doi.org/10.1145/3313831.3376317

Video
BiTipText: Bimanual Eyes-Free Text Entry on a Fingertip Keyboard
Abstract

We present a bimanual text input method on a miniature fingertip keyboard that invisibly resides on the first segment of the index finger of both hands. Text is entered by tapping the thumb-tip against the tip of the index finger. The design of our keyboard layout followed an iterative process: we first conducted a study to understand users' natural expectations of key handedness in a QWERTY layout. Among 67,108,864 design variations, we identified 1,295 candidates that satisfy user expectations well. Based on these results, we computed an optimized bimanual keyboard layout, jointly considering word ambiguity and movement time. Our user evaluation revealed that participants achieved an average text entry speed of 23.4 WPM.

Keywords
Micro finger gesture
text entry
wearable
bimanual input
Authors
Zheer Xu
Dartmouth College, Hanover, NH, USA
Weihao Chen
Dartmouth College & Tsinghua University, Hanover, NH, USA
Dongyang Zhao
Dartmouth College & Fudan University, Hanover, NH, USA
Jiehui Luo
Dartmouth College, Hanover, NH, USA
Te-Yen Wu
Dartmouth College, Hanover, NH, USA
Jun Gong
Dartmouth College, Hanover, NH, USA
Sicheng Yin
Dartmouth College & Tsinghua University, Hanover, NH, USA
Jialun Zhai
Dartmouth College & Fudan University, Hanover, NH, USA
Xing-Dong Yang
Dartmouth College, Hanover, NH, USA
DOI

10.1145/3313831.3376306

Paper URL

https://doi.org/10.1145/3313831.3376306

Video
Leveraging Error Correction in Voice-based Text Entry by Talk-and-Gaze
Abstract

We present the design and evaluation of Talk-and-Gaze (TaG), a method for selecting and correcting errors with voice and gaze. TaG uses eye gaze to overcome the inability of voice-only systems to provide spatial information. The user's point of gaze selects an erroneous word either by dwelling on the word for 800 ms (D-TaG) or by uttering a "select" voice command (V-TaG). A user study with 12 participants compared D-TaG, V-TaG, and a voice-only method for selecting and correcting words. Corrections were performed more than 20% faster with D-TaG than with V-TaG or the voice-only method. D-TaG also required 24% less selection effort than V-TaG and 11% less selection effort than voice-only error correction. D-TaG was well received in a subjective assessment, with 66% of users choosing it as their preferred method for error correction in voice-based text entry.
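The D-TaG condition selects a word once the gaze rests on it for 800 ms. As an illustration only (the function name, sample format, and reset-on-word-change behavior are assumptions, not details from the paper), dwell-based word selection over a stream of gaze samples might be sketched as:

```python
from typing import Iterable, Optional, Tuple

DWELL_MS = 800  # dwell threshold reported for D-TaG

def dwell_select(samples: Iterable[Tuple[int, str]],
                 threshold_ms: int = DWELL_MS) -> Optional[str]:
    """Return the first word the gaze rests on for at least `threshold_ms`.

    `samples` is a time-ordered stream of (timestamp_ms, word_under_gaze)
    pairs from an eye tracker; moving the gaze to a different word resets
    the dwell timer.
    """
    current_word = None
    dwell_start = 0
    for t, word in samples:
        if word != current_word:
            # Gaze moved to a new word: restart the dwell timer.
            current_word, dwell_start = word, t
        elif t - dwell_start >= threshold_ms:
            return current_word  # dwell threshold reached: select this word
    return None  # no word was fixated long enough

# Example: gaze hops from "quick" to "teh" and rests past the threshold.
stream = [(0, "quick"), (200, "teh"), (500, "teh"), (1100, "teh")]
print(dwell_select(stream))  # prints "teh"
```

In practice a real implementation would also need fixation filtering and word-boundary hit testing on screen coordinates; this sketch only shows the timing logic.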

Keywords
Text Entry
Voice
Eye Tracking
Multimodal
Usability
Interaction Design
Authors
Korok Sengupta
University of Koblenz-Landau, Koblenz, Germany
Sabin Bhattarai
University of Koblenz-Landau, Koblenz, Germany
Sayan Sarcar
University of Tsukuba, Tsukuba, Ibaraki, Japan
I. Scott MacKenzie
York University, Toronto, ON, Canada
Steffen Staab
University of Stuttgart & University of Southampton, Koblenz, Germany
DOI

10.1145/3313831.3376579

Paper URL

https://doi.org/10.1145/3313831.3376579

Swap: A Replacement-based Text Revision Technique for Mobile Devices
Abstract

Text revision is an important task for ensuring the accuracy of text content, yet revising text on mobile devices is cumbersome and time-consuming due to imprecise caret control and repetitive use of the backspace key. We present Swap, a novel replacement-based technique that facilitates text revision on mobile devices. We conducted two user studies to validate the feasibility and effectiveness of Swap compared with traditional text revision techniques. Results showed that Swap reduced the effort spent on caret control and repetitive backspace pressing during text revision. Most participants preferred the replacement-based technique over backspace and caret, commenting that the new technique is easy to learn and makes text revision rapid and intuitive.

Keywords
Text Revision
Mobile Device
Virtual Keyboard
Backspace
Caret Control
Authors
Yang Li
Kochi University of Technology, Kami, Kochi, Japan
Sayan Sarcar
University of Tsukuba, Tsukuba, Ibaraki, Japan
Sunjun Kim
Aalto University & Daegu Gyeongbuk Institute of Science and Technology, Espoo, Finland
Xiangshi Ren
Kochi University of Technology, Kami, Kochi, Japan
DOI

10.1145/3313831.3376217

Paper URL

https://doi.org/10.1145/3313831.3376217

Video