Mouth-based Interaction

Conference Name
CHI 2022
AirRes Mask: A Precise and Robust Virtual Reality Breathing Interface Utilizing Breathing Resistance as Output Modality
Abstract

Increased levels of interactivity and multi-sensory stimulation have been shown to enhance the immersion of Virtual Reality experiences. We present the AirRes mask, which enables users to use their breathing for precise, natural interactions with the virtual environment without suffering from limitations of the sensing equipment such as motion artifacts. Furthermore, the AirRes mask provides breathing resistance as a novel output modality that can be adjusted in real time by the application. In a user study, we demonstrate the mask's measurement precision for interaction as well as its ability to use breathing resistance to communicate contextual information, such as adverse environmental conditions that affect the user's virtual avatar. Our results show that the AirRes mask enhances virtual experiences and has the potential to create more immersive scenarios for applications, by reinforcing the perception of danger or improving situational awareness in training simulations, or for psychotherapy, by providing additional physical stimuli.
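
Since the application adjusts breathing resistance in real time, the output modality can be pictured as a per-frame mapping from a virtual condition to a valve setting. The sketch below is purely illustrative and assumes a hypothetical Valve stand-in; it is not the authors' implementation.

```python
# Illustrative sketch only, not the AirRes implementation: map a virtual
# environmental condition (here, smoke density) to a breathing-resistance
# level each frame. The Valve class is a hypothetical stand-in.

class Valve:
    """Hypothetical stand-in for the mask's adjustable airflow valve."""
    def set_restriction(self, level: float) -> None:
        print(f"valve restriction set to {level:.2f}")

def smoke_to_resistance(smoke_density: float, max_level: float = 0.8) -> float:
    """Linearly map smoke density in [0, 1] to a restriction level."""
    clamped = min(max(smoke_density, 0.0), 1.0)
    return clamped * max_level  # linear for simplicity; real mappings may differ

valve = Valve()
for density in (0.0, 0.3, 0.9):  # e.g., the avatar walks into thicker smoke
    valve.set_restriction(smoke_to_resistance(density))
```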

Award
Honorable Mention
Authors
Markus Tatzgern
Salzburg University of Applied Sciences, Puch, Austria
Michael Domhardt
Salzburg University of Applied Sciences, Puch, Austria
Martin Wolf
Salzburg University of Applied Sciences, Puch, Austria
Michael Cenger
Salzburg University of Applied Sciences, Puch, Austria
Gerlinde Emsenhuber
Salzburg University of Applied Sciences, Puch, Austria
Radomir Dinic
Salzburg University of Applied Sciences, Puch, Austria
Nathalie Gerner
Paracelsus Medical University, Salzburg, Austria
Arnulf Hartl
Paracelsus Medical University, Salzburg, Austria
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502090

Video
Mouth Haptics in VR using a Headset Ultrasound Phased Array
Abstract

Today's consumer virtual reality (VR) systems offer limited haptic feedback via vibration motors in handheld controllers. Rendering haptics to other parts of the body is an open challenge, especially in a practical and consumer-friendly manner. The mouth is of particular interest, as it is a close second to the fingertips in tactile sensitivity, offering a unique opportunity to add fine-grained haptic effects. In this research, we developed a thin, compact, beamforming array of ultrasonic transducers that can render haptic effects onto the mouth. Importantly, all components are integrated into the headset, meaning the user does not need to wear an additional accessory or place any external infrastructure in their room. We explored several effects, including point impulses, swipes, and persistent vibrations. These haptic sensations can be felt on the lips, teeth, and tongue, and can be incorporated into new and interesting VR experiences.
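
The focusing principle behind such an array is standard acoustics: delay each transducer so that every wavefront arrives at the focal point in phase. The sketch below illustrates only that general principle, assuming a flat 4 × 4 grid and 40 kHz transducers; the paper's actual hardware, geometry, and driving scheme are not reproduced here.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 deg C
FREQ = 40_000.0         # Hz; a common ultrasonic transducer frequency

def focus_phases(transducer_positions, focal_point):
    """Per-transducer phase offsets (radians) so that all emissions arrive
    in phase at the focal point -- the textbook focusing principle; the
    array geometry used below is an assumption."""
    dists = [math.dist(p, focal_point) for p in transducer_positions]
    d_max = max(dists)
    # Delay closer transducers so every wavefront coincides at the focus.
    return [2 * math.pi * FREQ * (d_max - d) / SPEED_OF_SOUND % (2 * math.pi)
            for d in dists]

# Example: a 4 x 4 grid with 10 mm pitch, focused 50 mm above its center.
array = [(x * 0.01, y * 0.01, 0.0) for x in range(4) for y in range(4)]
phases = focus_phases(array, (0.015, 0.015, 0.05))
```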

Award
Best Paper
Authors
Vivian Shen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Craig Shultz
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chris Harrison
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501960

Video
What Could Possibly Go Wrong When Interacting with Proactive Smart Speakers? A Case Study Using an ESM Application
Abstract

Voice user interfaces (VUIs) have made their way into people's daily lives, from voice assistants to smart speakers. Although VUIs typically just react to direct user commands, they increasingly incorporate elements of proactive behavior. In particular, proactive smart speakers have the potential for many applications, ranging from healthcare to entertainment; however, their usability in everyday life is subject to interaction errors. To systematically investigate the nature of these errors, we designed a voice-based Experience Sampling Method (ESM) application to run on proactive speakers. We captured 1,213 user interactions in a 3-week field deployment in 13 participants' homes. Through auxiliary audio recordings and logs, we identify substantial interaction errors and the strategies users apply to overcome them. We further analyze interaction timings and provide insights into the time cost of errors. We find that, even for answering simple ESMs, interaction errors occur frequently and can hamper both the usability of proactive speakers and the user experience. Our work also identifies multiple facets of VUIs, such as the timing of speech, that can be improved.
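
For readers unfamiliar with ESM, the method delivers short questionnaires at (semi-)random moments throughout the day. The sketch below shows one common scheduling scheme under assumed parameters (six prompts, a 12-hour window, a 45-minute minimum gap); it is not the study's application.

```python
import random
from datetime import datetime, timedelta

def schedule_esm_prompts(day_start: datetime, n_prompts: int = 6,
                         window_hours: float = 12.0,
                         min_gap_minutes: float = 45.0):
    """Draw random prompt times within a waking-hours window while keeping
    a minimum gap between prompts -- a common ESM scheduling scheme. The
    parameter values here are illustrative, not the study's configuration."""
    min_gap_s = min_gap_minutes * 60
    for _ in range(10_000):  # rejection sampling: retry until gaps fit
        offsets = sorted(random.uniform(0, window_hours * 3600)
                         for _ in range(n_prompts))
        if all(b - a >= min_gap_s for a, b in zip(offsets, offsets[1:])):
            return [day_start + timedelta(seconds=s) for s in offsets]
    raise RuntimeError("could not satisfy the spacing constraints")

# Example: six prompts across a 12-hour day starting at 08:00.
times = schedule_esm_prompts(datetime(2022, 5, 2, 8, 0))
```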

Authors
Jing Wei
University of Melbourne, Melbourne, Victoria, Australia
Benjamin Tag
University of Melbourne, Melbourne, Victoria, Australia
Johanne R. Trippas
University of Melbourne, Melbourne, Victoria, Australia
Tilman Dingler
University of Melbourne, Melbourne, Victoria, Australia
Vassilis Kostakos
University of Melbourne, Melbourne, Victoria, Australia
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517432

Video
Lattice Menu: A Low-Error Gaze-Based Marking Menu Utilizing Target-Assisted Gaze Gestures on a Lattice of Visual Anchors
Abstract

We present Lattice Menu, a gaze-based marking menu that uses a lattice of visual anchors to support accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at visual anchors along the gaze trajectories. Our evaluation showed that Lattice Menu exhibits a considerably low error rate (~1%) and a quick menu selection time (1.3-1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10 and 12°). Compared with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed remarkably (~5 times) fewer menu selection errors for expert usage. In post-interviews, all 12 subjects preferred Lattice Menu, and most (8 of 12) commented that the provision of visual targets facilitated more stable menu selections with reduced eye fatigue.
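
The target-assisted idea can be pictured as snapping noisy gaze samples to the nearest visual anchor on the lattice. The sketch below illustrates that general idea with a hypothetical 3 × 3 anchor layout and 4° spacing; the paper's actual selection logic and thresholds may differ.

```python
import math

def nearest_anchor(gaze_xy, anchors):
    """Snap a gaze sample (in degrees of visual angle) to the closest
    anchor -- the target-assisted idea in spirit, not the paper's exact
    selection logic."""
    return min(anchors, key=lambda a: math.dist(gaze_xy, a["pos"]))

# Hypothetical 3 x 3 lattice of anchors spaced 4 deg apart, labeled by
# the marking-menu stroke direction each one represents.
anchors = [{"pos": (dx * 4.0, dy * 4.0), "dir": (dx, dy)}
           for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

hit = nearest_anchor((3.4, -0.6), anchors)  # a noisy gaze sample
print(hit["dir"])  # -> (1, 0): interpreted as a rightward menu stroke
```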

Authors
Taejun Kim
School of Computing, KAIST, Daejeon, Korea, Republic of
Auejin Ham
KAIST, Daejeon, Korea, Republic of
Sunggeun Ahn
KAIST, Daejeon, Korea, Republic of
Geehyuk Lee
School of Computing, KAIST, Daejeon, Korea, Republic of
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501977

Video