This study session has ended. Thank you for participating.
Increased levels of interactivity and multi-sensory stimulation have been shown to enhance the immersion of virtual reality (VR) experiences. We present the AirRes mask, which enables users to use their breathing for precise, natural interactions with the virtual environment without suffering from limitations of the sensing equipment such as motion artifacts. Furthermore, the AirRes mask provides breathing resistance as a novel output modality that can be adjusted in real time by the application. In a user study, we demonstrate the mask's measurement precision for interaction as well as its ability to use breathing resistance to communicate contextual information, such as adverse environmental conditions that affect the user's virtual avatar. Our results show that the AirRes mask enhances virtual experiences and has the potential to create more immersive scenarios for applications by reinforcing the perception of danger or improving situational awareness in training simulations, or for psychotherapy by providing additional physical stimuli.
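To make the input/output coupling concrete, below is a minimal, hypothetical control-loop sketch. The sensor and actuator APIs (`read_flow`, `set_valve_opening`), the exhale threshold, and the candle example are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: breathing as input to a VR app, breathing
# resistance as application-controlled output. All device APIs here
# are placeholders, not the AirRes hardware interface.
import time

def read_flow() -> float:
    """Placeholder: airflow in L/min from a flow sensor in the mask
    (positive = exhale, negative = inhale)."""
    return 0.0

def set_valve_opening(fraction: float) -> None:
    """Placeholder: 1.0 = fully open (no resistance), 0.0 = closed."""
    pass

def blow_out_virtual_candle() -> None:
    print("candle extinguished")

def update(env_air_quality: float) -> None:
    """One control tick: exhalation drives an interaction; the valve
    couples resistance to the avatar's surroundings (0..1 air quality),
    so smoky air makes breathing feel harder."""
    if read_flow() > 10.0:          # strong exhale triggers an action
        blow_out_virtual_candle()
    set_valve_opening(max(0.2, env_air_quality))

for _ in range(500):                # ~10 s at a 50 Hz control rate
    update(env_air_quality=0.4)     # e.g., a smoke-filled room
    time.sleep(0.02)
```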
Today's consumer virtual reality (VR) systems offer limited haptic feedback via vibration motors in handheld controllers. Rendering haptics to other parts of the body is an open challenge, especially in a practical and consumer-friendly manner. The mouth is of particular interest, as it is second only to the fingertips in tactile sensitivity, offering a unique opportunity to add fine-grained haptic effects. In this research, we developed a thin, compact, beamforming array of ultrasonic transducers that can render haptic effects onto the mouth. Importantly, all components are integrated into the headset, meaning the user does not need to wear an additional accessory or place any external infrastructure in their room. We explored several effects, including point impulses, swipes, and persistent vibrations. These haptic sensations can be felt on the lips, teeth, and tongue, and can be incorporated into new and interesting VR experiences.
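The core algorithmic idea behind such an array is time-of-flight focusing: each transducer is phase-delayed so that all wavefronts arrive at a chosen focal point in phase. Here is a minimal sketch of that general technique; the 6 × 6 geometry, 10 mm pitch, and 40 kHz frequency are common illustrative values, not the paper's design.

```python
# Time-of-flight focusing for an ultrasonic phased array: compute
# per-element phase offsets so all emissions arrive in phase at a
# focal point, creating a localized pressure spot (the haptic effect).
import numpy as np

C = 343.0        # speed of sound in air, m/s
F = 40_000.0     # typical airborne ultrasonic transducer frequency, Hz

# assumed geometry: a 6x6 grid of transducers, 10 mm pitch, in the z=0 plane
xs, ys = np.meshgrid(np.arange(6), np.arange(6))
elements = np.stack(
    [xs.ravel() * 0.01, ys.ravel() * 0.01, np.zeros(36)], axis=1
)

def phase_delays(focus: np.ndarray) -> np.ndarray:
    """Phase offset (radians) per element so wavefronts align at `focus`."""
    d = np.linalg.norm(elements - focus, axis=1)   # path length per element, m
    t = (d.max() - d) / C                          # time advance per element, s
    return 2 * np.pi * F * t                       # time advance -> phase

# focus 30 mm in front of the array center, e.g., toward the lips
phases = phase_delays(np.array([0.025, 0.025, 0.03]))
```

Sweeping the focal point along a path over time yields swipe effects, while amplitude-modulating a fixed focus yields persistent vibrations.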
Voice user interfaces (VUIs) have made their way into people's daily lives, from voice assistants to smart speakers. Although VUIs typically just react to direct user commands, they increasingly incorporate elements of proactive behavior. In particular, proactive smart speakers have potential for many applications, ranging from healthcare to entertainment; however, their usability in everyday life is subject to interaction errors. To systematically investigate the nature of these errors, we designed a voice-based Experience Sampling Method (ESM) application to run on proactive speakers. We captured 1,213 user interactions in a three-week field deployment in 13 participants' homes. Through auxiliary audio recordings and logs, we identify substantial interaction errors and the strategies that users apply to overcome them. We further analyze interaction timings and provide insights into the time cost of errors. We find that, even for answering simple ESM questions, interaction errors occur frequently and can hamper both the usability of proactive speakers and the user experience. Our work also identifies multiple facets of VUIs, such as the timing of speech, that can be improved.
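As a rough illustration of how a proactive ESM prompt could log timing and error categories, consider the sketch below. The function names (`speak`, `listen`), the re-prompt limit, and the error taxonomy are assumptions for illustration, not the study's actual application.

```python
# Hypothetical sketch: one proactive ESM prompt that records the
# cumulative time cost of each attempt and classifies simple errors.
import time

def speak(text: str) -> None: ...          # placeholder TTS call
def listen(timeout_s: float) -> str | None: ...  # placeholder ASR call

def run_esm_prompt(question: str, valid: set[str]) -> dict:
    log = {"question": question, "t_prompt": time.time(), "attempts": []}
    for _ in range(3):                      # allow a few re-prompts
        speak(question)
        answer = listen(timeout_s=8.0)
        t = time.time() - log["t_prompt"]   # time cost so far, seconds
        if answer is None:
            log["attempts"].append({"t": t, "error": "no_response"})
        elif answer not in valid:
            log["attempts"].append({"t": t, "error": "unrecognized"})
        else:
            log["attempts"].append({"t": t, "answer": answer})
            return log                      # successful interaction
    log["abandoned"] = True                 # prompt failed entirely
    return log
```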
We present Lattice Menu, a gaze-based marking menu that uses a lattice of visual anchors to help users perform accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at the visual anchors along their gaze trajectories. Our evaluation showed that Lattice Menu exhibits a very low error rate (~1%) and quick menu selection times (1.3-1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10, and 12°). Compared with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed remarkably (~5 times) fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most (8 out of 12) commented that the visual targets facilitated more stable menu selections with reduced eye fatigue.
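A minimal sketch of the anchor-assisted decoding idea follows: each gaze sample snaps to its nearest anchor, and the compressed sequence of visited anchors encodes the multilevel menu path. The data layout and function names are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch: resolve a noisy gaze trajectory into an ordered
# sequence of lattice anchors; that sequence is the marking-menu path.
import math

def nearest_anchor(gaze, anchors):
    """Snap a gaze point (x, y in degrees) to the closest anchor."""
    return min(anchors, key=lambda a: math.dist(gaze, a))

def anchor_sequence(gaze_samples, anchors):
    """Compress consecutive duplicate anchors out of the trajectory;
    the remaining sequence selects one item per menu level."""
    seq = []
    for g in gaze_samples:
        a = nearest_anchor(g, anchors)
        if not seq or seq[-1] != a:
            seq.append(a)
    return seq

# example: a 3x3 lattice of anchors spaced 5 degrees apart
anchors = [(x * 5.0, y * 5.0) for x in range(3) for y in range(3)]
path = anchor_sequence([(0.4, 0.2), (4.8, 0.3), (5.1, 5.2)], anchors)
print(path)   # [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0)] -> a 3-level selection
```

Snapping to the nearest anchor is what tolerates gaze jitter: the selection depends only on which anchor region each fixation lands in, not on the exact trajectory.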