Text Entry Techniques

Conference Name
CHI 2024
PonDeFlick: A Japanese Text Entry on Smartwatch Commonalizing Flick Operation with Smartphone Interface
Abstract

While the QWERTY keyboard is the standard text-entry method for Latin-script languages on smart devices, the same is not always true for non-Latin-script languages. In Japanese, the most popular text-entry method on smartphones is a flick-based interface that systematically assigns more than fifty kana characters to the twelve keys of a numeric keypad in combination with flick directions. Against this background, studies on Japanese text entry for smartwatches have focused on efficient interface designs that exploit the regularity of the kana consonant-vowel structure, but have overlooked commonality with familiar interfaces. We therefore propose PonDeFlick, a Japanese text-entry method that shares its flick directions with the familiar smartphone interface while providing the entire touchscreen for gestural operation. A ten-day user study showed that PonDeFlick reached a text-entry speed of 57.7 characters per minute, significantly faster than both the numeric-keypad-based interface and a modification of PonDeFlick without the commonality.
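
For concreteness, the flick mapping described above can be sketched in a few lines of Kotlin: each of the twelve keys carries one consonant row, a plain tap selects the a-column, and flicks left, up, right, and down select the i-, u-, e-, and o-columns. The Direction enum and row table below are illustrative assumptions, not PonDeFlick's (or any vendor's) actual implementation.

    // Minimal sketch of the smartphone flick mapping: key = consonant row,
    // flick direction = vowel. Tap -> a, left -> i, up -> u, right -> e, down -> o.
    enum class Direction { TAP, LEFT, UP, RIGHT, DOWN }

    // Each string lists the five vowel variants of a row; spaces mark gaps
    // in the kana table (e.g. the ya-row has no i- or e-column character).
    val flickRows = mapOf(
        "あ" to "あいうえお", "か" to "かきくけこ", "さ" to "さしすせそ",
        "た" to "たちつてと", "な" to "なにぬねの", "は" to "はひふへほ",
        "ま" to "まみむめも", "や" to "や ゆ よ", "ら" to "らりるれろ",
        "わ" to "わをんー",
    )

    fun kanaFor(key: String, dir: Direction): Char? =
        flickRows[key]?.getOrNull(dir.ordinal)?.takeIf { it != ' ' }

    fun main() {
        println(kanaFor("か", Direction.UP))   // く: ka-row, u-column
        println(kanaFor("や", Direction.LEFT)) // null: no i-column in the ya-row
    }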

Authors
Kai Akamine
Doshisha University, Kyotanabe, Kyoto, Japan
Ryotaro Tsuchida
Doshisha University, Kyotanabe, Kyoto, Japan
Tsuneo Kato
Doshisha University, Kyotanabe, Kyoto, Japan
Akihiro Tamura
Doshisha University, Kyotanabe, Kyoto, Japan
Paper URL

https://doi.org/10.1145/3613904.3642569

ARTiST: Automated Text Simplification for Task Guidance in Augmented Reality
Abstract

Text presented in augmented reality provides in-situ, real-time information for users. However, such content can be hard to apprehend quickly during cognitively demanding AR tasks, especially when it is presented on a head-mounted display. We propose ARTiST, an automatic text simplification system that uses few-shot prompts and GPT-3 models to optimize text length and semantic content specifically for augmented reality. Informed by a formative study with seven users and three experts, our system combines a customized error-calibration model with a few-shot prompt to integrate syntactic, lexical, elaborative, and content simplification techniques and generate simplified AR text for head-worn displays. Results from a 16-user empirical study showed that ARTiST significantly reduces cognitive load and improves performance over both unmodified text and text modified via traditional methods. Our work constitutes a step toward automating the optimization of batch text data for readability and performance in augmented reality.
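
As a rough illustration of the few-shot prompting idea (not the paper's actual prompt, example pairs, or error-calibration model, none of which appear in the abstract), a prompt of this kind can be assembled from original/simplified pairs followed by the target text and sent to a GPT-3-class completion model:

    // Hypothetical example pairs; ARTiST's real prompt and calibration
    // model are not reproduced here.
    val fewShotExamples = listOf(
        "Rotate the valve counterclockwise until resistance is felt."
            to "Turn the valve left until it stops.",
        "Ensure the locking mechanism is fully disengaged before proceeding."
            to "Unlock it fully, then continue.",
    )

    fun buildPrompt(target: String): String = buildString {
        appendLine("Simplify instructions for an AR head-worn display.")
        for ((original, simplified) in fewShotExamples) {
            appendLine("Text: $original")
            appendLine("Simplified: $simplified")
        }
        appendLine("Text: $target")
        append("Simplified:")  // the model's completion is the simplified text
    }

    fun main() = println(buildPrompt("Verify that the coupling is torqued to specification."))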

Authors
Guande Wu
New York University, New York City, New York, United States
Jing Qian
New York University, New York, New York, United States
Sonia Castelo Quispe
New York University, New York, New York, United States
Shaoyu Chen
New York University, New York, New York, United States
João Rulff
New York University, New York, New York, United States
Claudio Silva
New York University, New York City, New York, United States
Paper URL

https://doi.org/10.1145/3613904.3642772

Exploration of Foot-based Text Entry Techniques for Virtual Reality Environments
Abstract

Foot-based input can serve as a supplementary or alternative approach to text entry in virtual reality (VR). This work explores the feasibility and design of hands-free foot-based techniques. We first conducted a preliminary study to assess foot-based text entry in standing and seated positions with tap and swipe input approaches. The findings showed that foot-based text input is feasible, with room for improvement in performance and usability. We then developed three foot-based techniques, two tap-based (FeetSymTap and FeetAsymTap) and one swipe-based (FeetGestureTap), and evaluated their performance in a second user study. The results show that the two tap-based techniques supported entry rates of 11.12 WPM and 10.80 WPM, while the swipe-based technique yielded 9.16 WPM. Our findings provide a solid foundation for the future design and implementation of foot-based text entry in VR and can potentially extend to MR and AR.
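
For context, entry rates like those quoted above conventionally treat five characters as one word. Assuming the standard formula WPM = ((|T| - 1) / seconds) * 60 / 5, where |T| is the length of the transcribed string (the study's exact logging pipeline is an assumption here), the computation is:

    // Standard text-entry WPM: five characters per word, first character untimed.
    fun wordsPerMinute(transcribed: String, seconds: Double): Double =
        (transcribed.length - 1) / seconds * 60.0 / 5.0

    fun main() {
        // A 28-character phrase entered in 29 s gives roughly 11.2 WPM,
        // in the range reported for the tap-based techniques.
        println(wordsPerMinute("the quick brown fox jumps ov", 29.0))
    }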

Authors
Tingjie Wan
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Liangyuting Zhang
Xi'an Jiaotong-Liverpool University, Suzhou, China
Hongyu Yang
Xi'an Jiaotong-Liverpool University, Suzhou, China
Pourang Irani
University of British Columbia (Okanagan), Kelowna, British Columbia, Canada
Lingyun Yu
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Hai-Ning Liang
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Paper URL

https://doi.org/10.1145/3613904.3642757

A Tool for Capturing Smartphone Screen Text
Abstract

Context sensing on smartphones is often used to understand user behaviour. Among the many data sources available, text is crucial because of its richness. However, previous work has been limited to collecting text from keyboard input only, or to intermittently and indirectly capturing screen text by taking screenshots and applying optical character recognition. Here, we present a novel software sensor that unobtrusively and continuously captures all screen text on smartphones. We conducted a validation study with 21 participants over a two-week period, in which they used our software on their personal smartphones. Our findings demonstrate how data from our sensor can be used to understand user behaviour and categorise mobile apps. We also show how smartphone sensing can be enhanced by combining our sensor with other sensors. We discuss the strengths and limitations of our sensor, highlighting potential areas for improvement and providing recommendations for its use.
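
The abstract does not name the capture mechanism, but on Android, continuous and unobtrusive screen-text capture is typically built on the accessibility API; the Kotlin sketch below is a plausible minimal version under that assumption, not the authors' implementation.

    import android.accessibilityservice.AccessibilityService
    import android.view.accessibility.AccessibilityEvent
    import android.view.accessibility.AccessibilityNodeInfo

    // Plausible sketch: walk the active window's node tree whenever its
    // content changes and collect all visible text.
    class ScreenTextSensor : AccessibilityService() {
        override fun onAccessibilityEvent(event: AccessibilityEvent?) {
            if (event?.eventType != AccessibilityEvent.TYPE_WINDOW_CONTENT_CHANGED) return
            val texts = mutableListOf<CharSequence>()
            collectText(rootInActiveWindow, texts)
            // Timestamp and persist `texts` here (e.g. to a local database).
        }

        private fun collectText(node: AccessibilityNodeInfo?, out: MutableList<CharSequence>) {
            if (node == null) return
            node.text?.let { out.add(it) }
            for (i in 0 until node.childCount) collectText(node.getChild(i), out)
        }

        override fun onInterrupt() {}  // required override; nothing to cancel
    }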

Authors
Songyan Teng
The University of Melbourne, Melbourne, Victoria, Australia
Simon D'Alfonso
The University of Melbourne, Parkville, Victoria, Australia
Vassilis Kostakos
University of Melbourne, Melbourne, Victoria, Australia
Paper URL

https://doi.org/10.1145/3613904.3642347
