Collaborative/Shared XR

Conference Name
CHI 2026
Anticipation Without Acceleration: Benefits of Shared Gaze in Collocated Augmented Reality Collaboration
Abstract

Knowing what collaborators attend to is essential. Previous studies demonstrated that shared gaze enhances coordination and social connectedness in remote settings. In collocated settings, gaze can be both naturally observable and technologically augmented. AR enables gaze cues to be rendered explicitly in the environment. To investigate if and how such cues are beneficial in collocated AR collaboration, we examined both qualitative and quantitative effects across three task types (puzzle, negotiation, search) and two spatial setups (plane, room), focusing on task completion time and the collaborative experience. In our user study with 24 dyads (n=48), we varied gaze visibility and measured task performance, user preference, social connectedness, and shared attention. Our results show that sharing gaze in collocated collaborative AR can increase shared attention, is perceived as helpful, and improves the user experience, similar to remote collaboration, but has a limited impact on the actual task completion time across the chosen tasks.

Authors
Julian Rasch
LMU Munich, Munich, Germany
Vladislav Dmitrievic Rusakov
LMU Munich, Munich, Germany
Jan Leusmann
LMU Munich, Munich, Germany
Florian Müller
TU Darmstadt, Darmstadt, Germany
Albrecht Schmidt
LMU Munich, Munich, Germany
Glass Chirolytics: Reciprocal Compositing and Shared Gestural Control for Face-to-Face Collaborative Visualization at a Distance
Abstract

Videoconference conversations about data often entail screen sharing visualization artifacts, in which nonverbal communication goes largely ignored. Beyond presentation use cases, conversations supported by visualization also arise in collaborative decision making, technical interviews, and tutoring: use cases that benefit from participants being able to see one another as they exchange questions about the data. In this paper, we employ a reciprocal compositing of visualization and interface widgets over the mirrored video of one's conversation partner, suggestive of a pane of glass, in which both parties can simultaneously manipulate composited elements via bimanual gestures. We demonstrate our approach with implementations of several visualization interfaces spanning the aforementioned use cases, and we evaluate our approach in a study (N = 16) comparing it to videoconferencing while using a mouse to interact with a collaborative web application. Our findings suggest that our approach promotes feelings of presence and mutual awareness of analytical intent.

Authors
Dion Barja
University of Manitoba, Winnipeg, Manitoba, Canada
Matthew Brehmer
University of Waterloo, Waterloo, Ontario, Canada
Video
DanXeReflect: Interacting with the Spatio-Temporal Past Movements for Embodied, Reflective Choreographic Collaboration
Abstract

Choreographic reflection relies on iterative dialogue, where dancers and choreographers refine movement through embodied demonstration and shared feedback in studio rehearsal. With the shift to video, this exchange becomes constrained: annotations detach from the body, gestures lose spatial grounding, and subtle variations are difficult to capture. Advances in markerless motion capture enable 3D reconstruction from rehearsal video, allowing past recordings to be re-materialized for embodied interaction in XR. We present DanXeReflect, an XR system that transforms flat video into a virtual studio where movements appear as interactive avatars. Users can re-enact poses to search sequences, perform alternative revisions alongside originals, and attach annotations directly to body parts. A study with choreographers and dancers shows how these embodied interactions reposition spatio-temporal data as collaborative anchors, extending reflective dialogue beyond co-located rehearsal into asynchronous, distributed practice.

Award
Honorable Mention
Authors
Hyunju Kim
Cornell University, Ithaca, New York, United States
Francois Guimbretiere
Cornell University, Ithaca, New York, United States
Bokyung Lee
Yonsei University, Seoul, Republic of Korea
Capability at a Glance: Design Guidelines for Intuitive Avatars Communicating Augmented Actions in Virtual Reality
Abstract

Virtual Reality (VR) enables users to engage with capabilities beyond human limitations, but it is not always obvious how to trigger these capabilities. Taking the lens of Affordance, we believe avatar design is the key to solving this issue, which ideally should communicate its capabilities and how to activate them. To understand the current practice, we selected eight capabilities across four categories and invited twelve professional designers to design avatars that communicate the capabilities and their corresponding interactions. From the resulting designs, we formed 16 guidelines to provide general and category-specific recommendations. Then, we validated these guidelines by letting two groups of twelve participants design avatars with and without guidelines. Participants rated the guidelines’ clarity and usefulness highly. External judges confirmed that avatars designed with the guidelines were more intuitive in conveying the capabilities and interaction methods. Finally, we demonstrated the applicability of the guidelines in avatar design for four VR applications.

Authors
Yang Lu
Zhejiang University, Hangzhou, Zhejiang, China
Tianyu Zhang
University of Rochester, Rochester, New York, United States
Jiamu Tang
University of Rochester, Rochester, New York, United States
Yanna Lin
The Hong Kong University of Science and Technology, Hong Kong, China
Jiankun Yang
University of Rochester, Rochester, New York, United States
Longyu Zhang
College of Computer Science and Technology, Hangzhou, China
Shijian Luo
Zhejiang University, Hangzhou, Zhejiang, China
Yukang Yan
University of Rochester, Rochester, New York, United States
Meme, Myself and AR: Exploring Memes Sharing in Face-to-face Conversation using Augmented Reality
Abstract

Internet memes are central to online communication, yet their visual humor is often lost in face-to-face (F2F) conversations. Augmented reality (AR) offers new ways to bring memes into F2F interactions, but it is unclear how memes can be integrated into F2F conversations using AR, and how they impact conversational dynamics. We surveyed meme users (N=29) to understand motivations and challenges in visualizing memes in F2F conversations. With these insights, we developed an AR meme-sharing prototype and invited 12 pairs of friends to design AR visualizations for their memes and use them in conversations. Our analysis reveals two AR-unique visualizations: merging memes with one's body (The-Meme-On-Me) and situating oneself in the meme's environment (Me-In-The-Meme). We observed two integration patterns: using speech as a setup before a meme punchline, and showing memes simultaneously with speech to amplify humor. We report users' reactions toward AR memes, showing how they enable playful social interaction.

Authors
Yanni Mei
TU Darmstadt, Darmstadt, Germany
Samuel Wendt
Technical University Darmstadt, Darmstadt, Hesse, Germany
Florian Müller
TU Darmstadt, Darmstadt, Germany
Jan Gugenheimer
TU Darmstadt, Darmstadt, Germany
DoubleMe: Local Blending in Multi-Display Environments with Augmented Reality to Facilitate Co-Located Collaboration
Abstract

Co-located collaboration often raises challenges related to physical constraints, e.g., stationary display setups or limited freedom of movement. We introduce DoubleMe, an Augmented Reality system that generates virtual duplicates of collaborators' workspaces, comprising both their displays and avatars. With DoubleMe, users maintain the layout of their own physical workspace and can position the duplicate of their collaborator's workspace nearby. This approach alleviates spatial constraints by enabling one user to join another's workspace without leaving their own. We report on two experiments examining the effectiveness of this approach. The first experiment investigates how avatar appearance and interaction influence user comfort and relationship dynamics. The second experiment assesses the performance benefits of duplicates over traditional co-located setups for collaborative tasks. Our findings suggest that the addition of duplicates to physical presence can enhance co-located collaboration while improving comfort.

Authors
Arthur Fages
IRIT, Université de Toulouse, Toulouse, France
Caroline Appert
Université Paris-Saclay, CNRS, Inria, Orsay, France
Olivier Chapuis
Université Paris-Saclay, CNRS, Inria, Orsay, France
Mapping Design Dimensions for Collaborative Learning in Virtual Reality: A Scoping Review
Abstract

Despite growing interest in multiuser virtual reality (VR) for education, evidence-based guidelines for designing effective collaborative VR learning experiences remain limited. This scoping review analyzed 23 empirical studies of collaborative learning in head-mounted display VR environments, exploring how contextual factors and technological affordances — including collaboration modality and system symmetry — shape activity design. We identified six distinct patterns of activities and analyzed the application of Computer-Supported Collaborative Learning (CSCL) scripts to support collaboration. Findings highlight predominant use of play-level (48%) and scene-level (48%) CSCL scripts, with minimal scriptlet-level implementation. Analysis of relationships between design dimensions, activity patterns, and collaboration supports reveals three fundamental design tensions: structured scaffolding versus flexible social interaction, role asymmetry versus technological symmetry, and shared physical presence versus distributed collaboration. This work contributes empirical foundations for collaborative VR learning design, while identifying gaps, design implications, and opportunities for advancing both HCI research and educational practice in immersive environments.

Authors
Michelle Lui
University of Toronto, Toronto, Ontario, Canada
Yuqi Wang
University of Toronto, Toronto, Ontario, Canada
Joy Jiaying Yu
Ontario Institute for Studies in Education (OISE), University of Toronto, Toronto, Ontario, Canada
Chloe Lok
Ontario Institute for Studies in Education (OISE), University of Toronto, Toronto, Ontario, Canada