Hand and Gaze

Conference Name
CHI 2024
GazePointAR: A Context-Aware Multimodal Voice Assistant for Pronoun Disambiguation in Wearable Augmented Reality
Abstract

Voice assistants (VAs) like Siri and Alexa are transforming human-computer interaction; however, they lack awareness of users' spatiotemporal context, resulting in limited performance and unnatural dialogue. We introduce GazePointAR, a fully-functional context-aware VA for wearable augmented reality that leverages eye gaze, pointing gestures, and conversation history to disambiguate speech queries. With GazePointAR, users can ask "what's over there?" or "how do I solve this math problem?" simply by looking and/or pointing. We evaluated GazePointAR in a three-part lab study (N=12): (1) comparing GazePointAR to two commercial systems, (2) examining GazePointAR's pronoun disambiguation across three tasks, and (3) an open-ended phase where participants could suggest and try their own context-sensitive queries. Participants appreciated the naturalness and human-like nature of pronoun-driven queries, although pronoun use was sometimes counter-intuitive. We then iterated on GazePointAR and conducted a first-person diary study examining how GazePointAR performs in the wild. We conclude by enumerating limitations and design considerations for future context-aware VAs.
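The core idea in the abstract is replacing a spoken pronoun with whatever the user is looking at or pointing to before the query is answered. The sketch below illustrates that fusion step in Python; the helper names, the priority order (pointing, then gaze, then conversation history), and the naive substring matching are assumptions for illustration, not GazePointAR's published pipeline.

    # Minimal sketch of pronoun disambiguation from multimodal context.
    # Names and priority order are hypothetical, not from GazePointAR itself.

    PRONOUNS = ("over there", "there", "this", "that", "here", "it")  # longest first

    def disambiguate(query, gaze_label=None, pointing_label=None, history=()):
        """Replace an ambiguous pronoun in a transcribed voice query with a
        concrete referent inferred from pointing, gaze, or recent dialogue."""
        referent = pointing_label or gaze_label or (history[-1] if history else None)
        if referent is None:
            return query                   # nothing to ground the pronoun in
        lowered = query.lower()
        for pronoun in PRONOUNS:           # check longer phrases before substrings
            if pronoun in lowered:
                # naive substring replacement, kept simple for the sketch
                return lowered.replace(pronoun, referent, 1)
        return query

    # "what's over there?" plus a gaze hit on a food-truck sign becomes a
    # self-contained query the assistant can answer without visual context.
    print(disambiguate("What's over there?", gaze_label="the taco truck"))
    # -> "what's the taco truck?"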

Authors
Jaewook Lee
University of Washington, Seattle, Washington, United States
Jun Wang
University of Washington, Seattle, Washington, United States
Elizabeth Brown
University of Washington, Seattle, Washington, United States
Liam Chu
University of Washington, Seattle, Washington, United States
Sebastian S. Rodriguez
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States
Jon E. Froehlich
University of Washington, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3613904.3642230

Video
QuadStretcher: A Forearm-Worn Skin Stretch Display for Bare-Hand Interaction in AR/VR
Abstract

The paradigm of bare-hand interaction has become increasingly prevalent in Augmented Reality (AR) and Virtual Reality (VR) environments, propelled by advancements in hand tracking technology. However, a significant challenge arises in delivering haptic feedback to users' hands, due to the necessity for the hands to remain bare. In response to this challenge, recent research has proposed an indirect solution of providing haptic feedback to the forearm. In this work, we present QuadStretcher, a skin stretch display featuring four independently controlled stretching units surrounding the forearm. While achieving rich haptic expression, our device also eliminates the need for a grounding base on the forearm by using a pair of counteracting tactors, thereby reducing bulkiness. To assess the effectiveness of QuadStretcher in facilitating immersive bare-hand experiences, we conducted a comparative user evaluation (n = 20) with a baseline solution, Squeezer. The results confirmed that QuadStretcher outperformed Squeezer in terms of expressing force direction and heightening the sense of realism, particularly in 3-DoF VR interactions such as pulling a rubber band, hooking a fishing rod, and swinging a tennis racket. We further discuss the design insights gained from qualitative user interviews, presenting key takeaways for future forearm-haptic systems aimed at advancing AR/VR bare-hand experiences.
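To make the "expressing force direction" claim concrete, the sketch below shows one plausible way four stretch units spaced around the forearm could render a directional cue: weight each unit by how well it aligns with the cued direction. The unit placement and cosine weighting are assumptions for illustration, not QuadStretcher's actual control scheme.

    # Rough sketch: mapping a force-direction cue to four skin-stretch units
    # spaced 90 degrees apart around the forearm. Placement and weighting
    # are assumed, not taken from the QuadStretcher hardware.
    import math

    UNIT_ANGLES = [0.0, 90.0, 180.0, 270.0]  # assumed unit positions around the arm

    def stretch_commands(force_angle_deg, magnitude):
        """Return a normalized stretch amount (0..1) for each of the four units.
        Units aligned with the cued direction stretch most; opposing units idle."""
        commands = []
        for unit_angle in UNIT_ANGLES:
            alignment = math.cos(math.radians(force_angle_deg - unit_angle))
            commands.append(round(max(0.0, alignment) * magnitude, 3))
        return commands

    # A pull cued at 45 degrees drives the two adjacent units equally and
    # leaves the opposite pair off.
    print(stretch_commands(45.0, 1.0))  # [0.707, 0.707, 0.0, 0.0]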

Authors
Taejun Kim
School of Computing, KAIST, Daejeon, Korea, Republic of
Youngbo Aram Shim
KAIST, Daejeon, Korea, Republic of
YoungIn Kim
School of Computing, KAIST, Daejeon, Korea, Republic of
Sunbum Kim
School of Computing, KAIST, Daejeon, Korea, Republic of
Jaeyeon Lee
UNIST, Ulsan, Korea, Republic of
Geehyuk Lee
School of Computing, KAIST, Daejeon, Korea, Republic of
Paper URL

https://doi.org/10.1145/3613904.3642067

Video
ArmDeformation: Inducing the Sensation of Arm Deformation in Virtual Reality Using Skin-Stretching
Abstract

With the development of virtual reality (VR) technology, research is actively exploring how multisensory feedback can create the illusion that a virtual avatar is an extension of one's body in VR. In line with this research direction, we introduce ArmDeformation, a wearable device employing skin-stretching to enhance virtual forearm ownership during an arm deformation illusion. We conducted five user studies with 98 participants. Using a tabletop device we developed, we identified the optimal number of actuators and a skin-stretching design that effectively increases the user's body ownership. Additionally, we explored the maximum visual threshold for forearm bending and the minimum detectable bending direction angle when using skin-stretching in VR. Finally, our study demonstrates that using ArmDeformation in VR applications enhances users' sense of realism and enjoyment compared to relying on visual feedback alone.
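As a rough illustration of how the measured thresholds might be applied at runtime, the sketch below clamps a requested virtual forearm bend to a visual threshold and snaps the stretch direction to a detectable step. Both constants are placeholders, not the values reported in the ArmDeformation studies.

    # Illustrative sketch only: gating a virtual forearm-bend illusion by a
    # visibility threshold and quantizing the bend direction to the step the
    # skin-stretch hardware can convey. Threshold values are placeholders.

    VISUAL_BEND_LIMIT_DEG = 20.0      # placeholder for the measured visual threshold
    DIRECTION_RESOLUTION_DEG = 45.0   # placeholder for the detectable direction step

    def plan_illusion(requested_bend_deg, direction_deg):
        """Clamp the rendered bend so it stays believable and snap the stretch
        direction to the nearest direction the hardware can convey."""
        bend = min(abs(requested_bend_deg), VISUAL_BEND_LIMIT_DEG)
        snapped = round(direction_deg / DIRECTION_RESOLUTION_DEG) * DIRECTION_RESOLUTION_DEG
        return bend, snapped % 360.0

    print(plan_illusion(35.0, 100.0))  # (20.0, 90.0)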

Authors
Yilong Lin
Southern University of Science and Technology, Shenzhen, China
Peng Zhang
Southern University of Science and Technology, Shenzhen, China
Eyal Ofek
Microsoft Research, Redmond, Washington, United States
Seungwoo Je
SUSTech, Shenzhen, China
Paper URL

https://doi.org/10.1145/3613904.3642518

Video
CLERA: A Unified Model for Joint Cognitive Load and Eye Region Analysis in the Wild
Abstract

Non-intrusive, real-time analysis of the dynamics of the eye region allows us to monitor humans’ visual attention allocation and estimate their mental state during the performance of real-world tasks, which can potentially benefit a wide range of human-computer interaction (HCI) applications. While commercial eye-tracking devices have been frequently employed, the difficulty of customizing these devices places unnecessary constraints on the exploration of more efficient, end-to-end models of eye dynamics. In this work, we propose CLERA, a unified model for Cognitive Load and Eye Region Analysis, which achieves precise keypoint detection and spatiotemporal tracking in a joint-learning framework. Our method demonstrates significant efficiency and outperforms prior work on tasks including cognitive load estimation, eye landmark detection, and blink estimation. We also introduce a large-scale dataset of 30k human faces with joint pupil, eye-openness, and landmark annotation, which aims at supporting future HCI research on human factors and eye-related analysis.
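The "joint-learning framework" can be pictured as a shared encoder feeding task-specific heads that are trained together. The PyTorch sketch below shows only that multi-task shape; the layer sizes, heads, and losses are illustrative and do not reproduce the CLERA architecture.

    # Minimal multi-task sketch: a shared encoder with separate heads for
    # eye-landmark regression and cognitive-load estimation. Not CLERA itself.
    import torch
    import torch.nn as nn

    class JointEyeModel(nn.Module):
        def __init__(self, num_landmarks=6):
            super().__init__()
            self.backbone = nn.Sequential(            # shared eye-region features
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.landmark_head = nn.Linear(32, num_landmarks * 2)  # (x, y) per point
            self.load_head = nn.Linear(32, 1)                      # scalar load score

        def forward(self, eye_crop):
            feats = self.backbone(eye_crop)
            return self.landmark_head(feats), self.load_head(feats)

    model = JointEyeModel()
    landmarks, load = model(torch.randn(4, 1, 64, 64))  # batch of grayscale eye crops
    print(landmarks.shape, load.shape)  # torch.Size([4, 12]) torch.Size([4, 1])

In training, a weighted sum of a landmark regression loss and a cognitive-load loss would be backpropagated through the shared encoder, which is what lets the two tasks benefit from common eye-region features.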

Authors
Li Ding
UMass Amherst, Amherst, Massachusetts, United States
Jack Terwilliger
University of California San Diego, La Jolla, California, United States
Aishni Parab
University of California, Los Angeles, Los Angeles, California, United States
Meng Wang
UMass Amherst, Amherst, Massachusetts, United States
Lex Fridman
MIT, Cambridge, Massachusetts, United States
Bruce Mehler
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Bryan Reimer
MIT, Cambridge, Massachusetts, United States
Video
How Gaze Visualization Facilitates Initiation of Informal Communication in 3D Virtual Spaces
Abstract

This study explores how gaze visualization in virtual spaces facilitates the initiation of informal communication. Three styles of gaze cue visualization (arrow, bubbles, and miniature avatar) with two types of gaze behavior (one-sided gaze and joint gaze) were evaluated. 96 participants used either a non-visualized gaze cue or one of the three visualized gaze cues. The results showed that all visualized gaze cues facilitated the initiation of informal communication more effectively than the non-visualized gaze cue. For one-sided gaze, overall, bubbles had more positive effects on the gaze receiver’s behaviors and experiences than the other two visualized gaze cues, although the only statistically significant difference was in the verbal reaction rates. For joint gaze, all three visualized gaze cues had positive effects on the receiver’s behaviors and experiences. The design implications of the gaze visualization and the confederate-based evaluation method contribute to research on informal communication and social virtual reality.
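The study distinguishes one-sided gaze from joint gaze. The sketch below shows how a social VR client might classify the two from each user's current gaze target before choosing a cue visualization (arrow, bubbles, or miniature avatar); the operationalization and helper names are assumptions, not the study's implementation.

    # Sketch of classifying gaze behavior between two users in a shared 3D
    # space, for picking a gaze-cue visualization. Categories are simplified.

    def classify_gaze(a_target, b_target, a_id, b_id):
        """a_target/b_target: id of whatever each user is currently looking at."""
        if a_target is not None and a_target == b_target:
            return "joint gaze"        # both users look at the same object
        if a_target == b_id or b_target == a_id:
            return "one-sided gaze"    # one user looks at the other
        return "no gaze cue"

    print(classify_gaze("whiteboard", "whiteboard", "alice", "bob"))  # joint gaze
    print(classify_gaze("bob", "plant", "alice", "bob"))              # one-sided gaze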

Authors
Junko Ichino
Tokyo City University, Yokohama, Japan
Masahiro Ide
Tokyo City University, Yokohama, Japan
Takehito Yoshiki
TIS Inc., Shinjuku, Tokyo, Japan
Hitomi Yokoyama
Okayama University of Science, Okayama, Japan
Hirotoshi Asano
Kogakuin University, Shinjuku, Tokyo, Japan
Hideo Miyachi
Tokyo City University, Yokohama, Japan
Daisuke Okabe
Tokyo City University, Yokohama, Kanagawa, Japan
Video