This study session has ended. Thank you for participating.
While the QWERTY keyboard is the standard text-entry method for Latin-script languages on smart devices, this is not always the case for non-Latin-script languages. In Japanese, the most popular text-entry method on smartphones is a flick-based interface that systematically assigns more than fifty kana characters to the twelve keys of a numeric keypad in combination with flick directions. Under these circumstances, studies on Japanese text entry on smartwatches have focused on efficient interface designs that exploit the regularity of the kana consonant-vowel structure, but have overlooked commonality with familiar interfaces. We therefore propose PonDeFlick, a Japanese text-entry method that shares its flick directions with the familiar smartphone interface while providing the entire touchscreen for gestural operation. A ten-day user study showed that PonDeFlick reached a text-entry speed of 57.7 characters per minute, significantly faster than both the numeric-keypad-based interface and a modification of PonDeFlick without the commonality.
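To make the flick convention concrete, the sketch below models the conventional smartphone flick keyboard whose directions PonDeFlick is designed to stay compatible with: each of the twelve keys carries one consonant row of the kana table, and the tap or flick direction selects the vowel a/i/u/e/o. The layout shown is the widely used smartphone convention, not PonDeFlick's own key arrangement.

```python
# Minimal sketch of the conventional Japanese flick-keyboard convention:
# each key holds one consonant row, and the flick direction selects the vowel.
# This illustrates the standard smartphone layout, not PonDeFlick's design.

# Vowel order used by the standard flick keyboard: tap, left, up, right, down.
DIRECTIONS = ("tap", "left", "up", "right", "down")

# A few representative rows of the kana table (key label -> a/i/u/e/o characters).
FLICK_ROWS = {
    "あ": "あいうえお",
    "か": "かきくけこ",
    "さ": "さしすせそ",
    "た": "たちつてと",
    "な": "なにぬねの",
}

def flick_to_kana(key: str, direction: str) -> str:
    """Return the kana produced by flicking `key` in `direction`."""
    return FLICK_ROWS[key][DIRECTIONS.index(direction)]

if __name__ == "__main__":
    # Flicking the か key to the right selects the 'e' vowel: け.
    print(flick_to_kana("か", "right"))  # -> け
```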
Text presented in augmented reality provides in-situ, real-time information for users. However, such content can be challenging for users to apprehend quickly while they are engaged in cognitively demanding AR tasks, especially when it is presented on a head-mounted display. We propose ARTiST, an automatic text simplification system that uses few-shot prompting with GPT-3 models to optimize text length and semantic content specifically for augmented reality. Developed out of a formative study that included seven users and three experts, our system combines a customized error calibration model with a few-shot prompt to integrate syntactic, lexical, elaborative, and content simplification techniques and generate simplified AR text for head-worn displays. Results from a 16-user empirical study showed that ARTiST lightens the cognitive load and improves performance significantly over both unmodified text and text modified via traditional methods. Our work constitutes a step towards automating the optimization of batch text data for readability and performance in augmented reality.
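As an illustration of the few-shot prompting idea, a simplification prompt for a GPT-3-style completion model might be assembled as below. The instruction wording, examples, and length budget are hypothetical placeholders, not ARTiST's actual error-calibrated prompt.

```python
# Illustrative sketch of assembling a few-shot text-simplification prompt.
# The instruction, examples, and word budget are hypothetical; ARTiST's
# actual calibrated prompt is not reproduced here.

FEW_SHOT_EXAMPLES = [
    {
        "original": "The ventilation subsystem must be inspected prior to initiating the calibration procedure.",
        "simplified": "Check the fan before you start calibration.",
    },
    {
        "original": "Ensure that the fastening components are torqued to the manufacturer-specified values.",
        "simplified": "Tighten the bolts to the listed torque.",
    },
]

def build_prompt(text: str, max_words: int = 12) -> str:
    """Compose a few-shot prompt asking the model to shorten and simplify `text`."""
    parts = [
        f"Simplify the text for a head-worn AR display. "
        f"Keep the key content and use at most {max_words} words.\n"
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Original: {ex['original']}\nSimplified: {ex['simplified']}\n")
    parts.append(f"Original: {text}\nSimplified:")
    return "\n".join(parts)

if __name__ == "__main__":
    prompt = build_prompt("Rotate the valve counterclockwise until resistance is encountered.")
    print(prompt)  # This string would be sent to a GPT-3 completion endpoint.
```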
Foot-based input can serve as a supplementary or alternative approach to text entry in virtual reality (VR). This work explores the feasibility and design of foot-based techniques that are hands-free. We first conducted a preliminary study to assess foot-based text entry in standing and seated positions with tap and swipe input approaches. The findings showed that foot-based text input was feasible, with the possibility for performance and usability improvements. We then developed three foot-based techniques, including two tap-based techniques (FeetSymTap and FeetAsymTap) and one swipe-based technique (FeetGestureTap), and evaluated their performance via another user study. The results show that the two tap-based techniques supported entry rates of 11.12 WPM and 10.80 WPM, while the swipe-based technique led to 9.16 WPM. Our findings provide a solid foundation for the future design and implementation of foot-based text entry in VR and have the potential to be extended to MR and AR.
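For reference, entry rates such as those above are conventionally computed with the standard words-per-minute metric, in which one "word" is defined as five characters; the snippet below shows that conventional formula (assuming, since the abstract does not say, that the reported rates follow it).

```python
# Conventional text-entry rate metric: words per minute (WPM), where one
# "word" is five characters. This is the standard convention in text-entry
# studies; whether the authors used exactly this formula is an assumption.

def words_per_minute(transcribed: str, seconds: float) -> float:
    """WPM = ((|T| - 1) / 5) * (60 / seconds), with |T| the transcribed length."""
    return ((len(transcribed) - 1) / 5.0) * (60.0 / seconds)

if __name__ == "__main__":
    # Example: a 56-character phrase entered in 60 seconds -> 11.0 WPM.
    print(round(words_per_minute("t" * 56, 60.0), 2))
```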
Context sensing on smartphones is often used to understand user behaviour. Among the many data sources available, text is particularly valuable because of its richness. However, previous work has been limited to collecting text from keyboard input only, or to intermittently and indirectly capturing screen text by taking screenshots and applying optical character recognition. Here, we present a novel software sensor that unobtrusively and continuously captures all screen text on smartphones. We conducted a validation study with 21 participants over a two-week period, during which they used our software on their personal smartphones. Our findings demonstrate how data from our sensor can be used to understand user behaviour and categorise mobile apps. We also show how smartphone sensing can be enhanced by combining our sensor with other sensors. We discuss the strengths and limitations of our sensor, highlighting potential areas for improvement and providing recommendations for its use.
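To suggest how such continuously captured screen text might feed behaviour analysis and app categorisation, the sketch below aggregates hypothetical sensor records per app and applies a simple keyword heuristic. The record schema, category keywords, and categorisation rule are all illustrative assumptions, not the authors' pipeline.

```python
# Illustrative per-app aggregation of captured screen-text records.
# The record format and keyword-based categorisation are assumptions for
# illustration only, not the sensor's actual output or analysis method.

from collections import Counter, defaultdict

# Hypothetical records: (app package name, captured screen text).
RECORDS = [
    ("com.example.news", "Breaking headline election results announced today"),
    ("com.example.chat", "You: see you at 7? Friend: sounds good"),
    ("com.example.news", "Weather forecast and sports scores"),
]

CATEGORY_KEYWORDS = {
    "news": {"headline", "election", "weather", "sports"},
    "messaging": {"you:", "friend:", "sent", "typing"},
}

def categorise(app_texts: dict) -> dict:
    """Assign each app the category whose keywords appear most often in its text."""
    labels = {}
    for app, texts in app_texts.items():
        words = Counter(w.lower() for t in texts for w in t.split())
        scores = {
            cat: sum(words[k] for k in kws) for cat, kws in CATEGORY_KEYWORDS.items()
        }
        labels[app] = max(scores, key=scores.get)
    return labels

if __name__ == "__main__":
    per_app = defaultdict(list)
    for app, text in RECORDS:
        per_app[app].append(text)
    # e.g. {'com.example.news': 'news', 'com.example.chat': 'messaging'}
    print(categorise(per_app))
```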