This study group has concluded. Thank you for participating.
Messaging is a ubiquitous digital communication medium. It is also a minimal one: it cannot convey immediate feedback, tone, facial expressions, hesitations, or pauses, nor let one follow the other person's train of thought. This paper combines quantitative and qualitative approaches to analyze richer forms of typing indicators in messaging interfaces, such as showing text as it is typed. By assessing users' subjective workload and interpreting these findings in the context of users' experiences, we found that more expressive typing indicators were perceived as "rich in communication": they helped people communicate more, allowing for closer connections. These indicators also increased users' perceived co-presence. In addition, our research suggests there may be benefits to designing customized typing indicators for relationship maintenance and for task-based communication.
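The paper does not specify how a "show text as it is typed" indicator is implemented; as a minimal sketch of one way such an indicator could be wired up (the `TypingChannel` class and message format below are hypothetical, not from the paper):

```python
import json
import time

class TypingChannel:
    """Hypothetical transport that broadcasts typing state to a chat peer."""

    def __init__(self, send_fn):
        self.send = send_fn  # e.g., a websocket's send method

    def on_keystroke(self, draft_text: str) -> None:
        # Most expressive indicator: stream the in-progress draft itself,
        # exposing hesitations and revisions to the other person.
        self.send(json.dumps({"type": "draft", "text": draft_text,
                              "ts": time.time()}))

    def on_idle(self) -> None:
        # Minimal indicator: only signal that the partner stopped typing.
        self.send(json.dumps({"type": "idle", "ts": time.time()}))

# Usage: channel = TypingChannel(ws.send); channel.on_keystroke("I was thinki")
```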
Text entry on tablet touchscreens is a basic need nowadays. Tablet keyboards require visual attention for users to locate keys and thus do not support efficient touch typing. They also occupy a large proportion of screen space, which limits access to information. To address these problems, we propose ResType, an adaptive and invisible keyboard on three-state touch surfaces (e.g., tablets with unintentional touch prevention). ResType allows users to rest their hands on it and automatically adapts the keyboard to the resting fingers, so users do not need visual attention to locate keys, which supports touch typing. We quantitatively explored users' resting finger patterns on ResType, based on which we proposed an augmented Bayesian decoding algorithm for ResType, achieving 96.3% top-1 and 99.0% top-3 accuracies. In a 5-day evaluation, ResType reached 41.26 WPM, outperforming normal tablet keyboards by 13.5% and reaching 86.7% of physical keyboard speed. It solves the occlusion problem while maintaining typing speed comparable to current visible tablet keyboards.
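The abstract describes the decoder only at a high level. As an illustrative sketch of a basic Bayesian touch decoder (our own simplification, not the authors' augmented algorithm), each key is scored by a Gaussian touch likelihood around key centers that have been re-fit to the user's resting fingers, weighted by a language-model prior:

```python
import math

def decode_touch(touch, key_centers, prior, sigma=0.4):
    """Rank keys for one touch: P(key | touch) ∝ P(touch | key) · P(key).

    touch        -- (x, y) of the touch point, in key-width units
    key_centers  -- {key: (x, y)} centers, adapted to the resting fingers
    prior        -- {key: probability} from a character language model
    sigma        -- spread of an assumed isotropic 2D Gaussian touch model
    """
    scores = {}
    for key, (cx, cy) in key_centers.items():
        d2 = (touch[0] - cx) ** 2 + (touch[1] - cy) ** 2
        likelihood = math.exp(-d2 / (2 * sigma ** 2))
        scores[key] = likelihood * prior.get(key, 1e-6)
    total = sum(scores.values())
    # Sorted posterior: the first entry is the top-1 guess, the first
    # three are the top-3 candidates reported in accuracy figures.
    return sorted(((k, s / total) for k, s in scores.items()),
                  key=lambda kv: -kv[1])
```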
The emergent Optical Head-Mounted Display (OHMD) platform has made mobile reading possible by superimposing digital text onto users' view of the environment. However, mobile reading on OHMDs must be balanced with the user's environmental awareness. Hence, a series of studies was conducted to explore how text-spacing strategies facilitate this balance. These studies found that increasing spacing within the text can significantly enhance mobile reading on OHMDs in both simple and complex navigation scenarios, and that this benefit comes mainly from increasing inter-line spacing rather than inter-word spacing. Compared with existing positioning strategies, increased inter-line spacing improves mobile OHMD reading in terms of reading speed (11.9% faster), walking speed (3.7% faster), and switching between reading and navigation (106.8% more accurate and 33% faster).
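Concretely, the manipulated variables can be thought of as independent spacing multipliers in the text renderer. A hypothetical layout configuration (names and defaults are our assumptions, not the study's apparatus) might look like:

```python
from dataclasses import dataclass

@dataclass
class TextLayout:
    """Hypothetical OHMD text-layout parameters (multiples of font size)."""
    font_size_px: int = 32
    inter_word_spacing: float = 1.0   # 1.0 = renderer default word gap
    inter_line_spacing: float = 1.0   # 1.0 = single-spaced lines

    def line_advance_px(self) -> float:
        # Vertical distance between baselines; raising inter_line_spacing
        # is the manipulation the studies found beneficial.
        return self.font_size_px * self.inter_line_spacing

# e.g., a double-spaced condition that leaves word spacing untouched:
layout = TextLayout(inter_line_spacing=2.0)
```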
Interactions with digital devices in social settings can reduce social engagement and interrupt conversations. To overcome these drawbacks, we designed ParaGlassMenu, a semi-transparent circular menu that can be displayed around a conversation partner's face on an Optical See-Through Head-Mounted Display (OHMD) and operated subtly using a ring mouse. We evaluated ParaGlassMenu against several alternative approaches (smartphone, voice assistant, and linear OHMD menus) by manipulating Internet-of-Things (IoT) devices in a simulated conversation with a digital partner. Results indicated that ParaGlassMenu offered the best overall performance in balancing social engagement with digital interaction needs in conversations. To validate these findings, we conducted a second study in a realistic conversation scenario involving commodity IoT devices; the results confirmed the utility and social acceptance of ParaGlassMenu. Based on these results, we discuss implications for designing attention-maintaining subtle interaction techniques on OHMDs.
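As a sketch of the layout idea only (the geometry and function names below are our assumptions, not the paper's implementation), menu items can be anchored at equal angles on a circle around the tracked face position, so the wearer keeps the partner's face in view while glancing at controls:

```python
import math

def circular_menu_positions(face_center, radius, n_items, start_deg=-90.0):
    """Place n_items around a conversation partner's face.

    face_center -- (x, y) of the tracked face in display coordinates
    radius      -- circle radius, chosen so items do not occlude the face
    Returns a list of (x, y) anchor points; the first item sits at the top.
    """
    cx, cy = face_center
    positions = []
    for i in range(n_items):
        theta = math.radians(start_deg + 360.0 * i / n_items)
        positions.append((cx + radius * math.cos(theta),
                          cy + radius * math.sin(theta)))
    return positions

# e.g., six IoT controls around a face detected at (640, 360):
anchors = circular_menu_positions((640, 360), radius=180, n_items=6)
```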
Three-state virtual keyboards, which differentiate contact events among released, touched, and pressed states, have the potential to improve the overall typing experience and narrow the gap between virtual and physical keyboards. By incorporating force sensitivity, three-state virtual keyboards can use a force threshold to better classify a contact event. However, our limited knowledge of the role force plays while typing on virtual keyboards has hindered further progress. Through a series of studies, we observe that a uniform threshold is not optimal: the force applied while typing varies significantly across keys and among participants. We therefore propose three approaches to improve on the uniform threshold, and we show that a carefully selected non-uniform threshold function can be sufficient to delineate typing events on a three-state keyboard. Finally, we conclude with lessons learned, suggestions for future improvements, and comparisons with currently available methods.
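One way to read "non-uniform threshold function" (a sketch under our own assumptions, not the authors' exact model) is a per-key, per-user threshold table used to split touched from pressed contacts:

```python
def classify_contact(force, key, thresholds, default=0.3):
    """Classify a contact on a force-sensitive three-state keyboard.

    force      -- contact force in newtons (0 means no contact)
    key        -- key under the finger
    thresholds -- {key: newtons}; per-key values fitted per user, since
                  typing force varies across keys and participants
    """
    if force <= 0.0:
        return "released"
    limit = thresholds.get(key, default)
    return "pressed" if force >= limit else "touched"

# A uniform threshold is the special case where every key shares one value.
per_key = {"a": 0.25, "space": 0.45, "p": 0.30}  # illustrative numbers only
state = classify_contact(0.4, "space", per_key)  # -> "touched"
```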
Writing text with eye gaze alone is an appealing hands-free text entry method. However, existing gaze-based text entry methods cause eye fatigue and are slow because, to enter a word, they often require users to dwell on each letter or to mark the starting and ending positions of a gaze path with extra operations. In this paper, we propose GlanceWriter, a text entry method that lets users enter text by glancing over keys one by one, without dwelling on any key or specifying the starting and ending positions of a gaze path when typing a word. To achieve this, GlanceWriter probabilistically determines the letters to be typed based on gaze locations and the dynamics of gaze movements. Our user studies demonstrate that GlanceWriter significantly improves text entry performance over EyeSwipe, a dwell-free input method that uses "reverse crossing" to identify the starting and ending keys. GlanceWriter also outperforms the dwell-free gaze input method of Tobii's Communicator 5, a commercial eye-gaze communication system. Overall, GlanceWriter achieves dwell-free and crossing-free text entry by probabilistically decoding gaze paths, offering a promising gaze-based text entry method.
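A minimal sketch of probabilistic gaze-path decoding (our own simplification, not GlanceWriter's actual decoder): score each candidate word by how well some gaze sample fits each of its letters' keys, weighting slow samples more heavily, since low gaze velocity suggests an intended key. A full decoder would also enforce the temporal order of letters and apply a language-model prior.

```python
import math

def score_word(word, gaze_samples, key_centers, sigma=0.5):
    """Log-score a candidate word against a gaze path.

    gaze_samples -- list of (x, y, speed) gaze points, in key-width units
    key_centers  -- {letter: (x, y)} key centers
    Higher score = path better explained by glancing over the word's keys.
    """
    log_p = 0.0
    for letter in word:
        kx, ky = key_centers[letter]
        best = -math.inf
        for (gx, gy, speed) in gaze_samples:
            d2 = (gx - kx) ** 2 + (gy - ky) ** 2
            # Gaussian spatial fit, boosted when the eye moves slowly.
            fit = -d2 / (2 * sigma ** 2) + math.log(1.0 / (1.0 + speed))
            best = max(best, fit)
        log_p += best
    return log_p

# Pick the best word: max(candidates, key=lambda w: score_word(w, path, keys))
```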