Immersive Experiences: UIs and Personalisation

Conference Name
CHI 2024
UI Mobility Control in XR: Switching UI Positionings between Static, Dynamic, and Self Entities
Abstract

Extended reality (XR) has the potential for seamless user interface (UI) transitions across people, objects, and environments. However, the design space, applications, and common practices of 3D UI transitions remain underexplored. To address this gap, we conducted a need-finding study with 11 participants, identifying and distilling a taxonomy of three types of UI placement: affixed to static, dynamic, or self entities. We further surveyed 113 commercial applications to understand common practices of 3D UI mobility control; only 6.2% of these applications allowed users to transition UIs between entities. In response, we built interaction prototypes to facilitate UI transitions between entities. We report results from a qualitative user study (N=14) of 3D UI mobility control using our FingerSwitches technique; the results suggest that perceived usefulness is affected by the types of entities and environments involved. We aspire to tackle a vital need for UI mobility within XR.
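To make the taxonomy concrete, the minimal Python sketch below models a UI panel that can be re-affixed to static, dynamic, or self entities. This is a hypothetical illustration, not the paper's implementation; all class and function names are invented.

    from dataclasses import dataclass
    from enum import Enum, auto

    class EntityKind(Enum):
        STATIC = auto()   # fixed in the environment, e.g., a wall or table
        DYNAMIC = auto()  # moving, e.g., another person or a vehicle
        SELF = auto()     # the user, e.g., hand, wrist, or head

    @dataclass
    class Anchor:
        name: str
        kind: EntityKind

        def pose(self):
            # Placeholder: a real XR runtime would return the tracked pose here.
            return (0.0, 0.0, 0.0)

    class UIPanel:
        """A UI element that follows whichever entity it is currently affixed to."""

        def __init__(self, anchor: Anchor):
            self.anchor = anchor

        def switch_to(self, new_anchor: Anchor) -> None:
            # A FingerSwitches-style gesture would trigger this transition.
            print(f"UI moves from {self.anchor.name} ({self.anchor.kind.name}) "
                  f"to {new_anchor.name} ({new_anchor.kind.name})")
            self.anchor = new_anchor

        def update(self):
            return self.anchor.pose()  # re-evaluated every frame

    panel = UIPanel(Anchor("wall", EntityKind.STATIC))
    panel.switch_to(Anchor("left_hand", EntityKind.SELF))

The key design point the taxonomy captures is that "where a UI lives" reduces to which anchor it is parented to, so a transition is a single reparenting operation.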

Authors
Siyou Pei
University of California Los Angeles, Los Angeles, California, United States
David Kim
Google Research, Zurich, Switzerland
Alex Olwal
Google Research, Mountain View, California, United States
Yang Zhang
University of California Los Angeles, Los Angeles, California, United States
Ruofei Du
Google Research, San Francisco, California, United States
Paper URL

doi.org/10.1145/3613904.3642220

ProInterAR: A Visual Programming Platform for Creating Immersive AR Interactions
Abstract

AR applications commonly contain diverse interactions among different AR contents. Creating such applications requires advanced programming skills to script the interactive behaviors of AR content, repeated transfer and adjustment of virtual content from virtual to physical scenes, testing that traverses desktop interfaces and target AR scenes, and digitization of AR content. Existing immersive tools for prototyping/authoring such interactions are tailored to domain-specific applications. To support novice AR creators in programming general interactive behaviors of real and virtual objects and environments, we propose ProInterAR, an integrated visual programming platform for creating immersive AR applications with a tablet and an AR-HMD. Users construct interaction scenes by creating virtual content and augmenting real content from the view of an AR-HMD, script interactive behaviors by stacking blocks on a tablet UI, and then execute and control the interactions in the AR scene. We showcase a wide range of AR application scenarios enabled by ProInterAR, including AR games, AR teaching, sequential animation, and AR information visualization. Two usability studies validate that novice AR creators can easily program a variety of desired AR applications using ProInterAR.
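As a rough illustration of the block-stacking model the abstract describes, a visual program can be represented as nested trigger/action blocks. This is a generic sketch, not ProInterAR's actual API; the block kinds and the run function are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Block:
        kind: str                      # e.g., "on_tap", "rotate", "play_sound"
        params: dict = field(default_factory=dict)
        children: list["Block"] = field(default_factory=list)  # stacked blocks

    def run(block: Block) -> None:
        # A real platform would dispatch to the AR runtime; here we just trace.
        print(f"execute {block.kind} {block.params}")
        for child in block.children:
            run(child)

    # "When the virtual lamp is tapped, rotate it and play a sound."
    program = Block("on_tap", {"target": "virtual_lamp"}, [
        Block("rotate", {"target": "virtual_lamp", "degrees": 90}),
        Block("play_sound", {"clip": "click"}),
    ])
    run(program)

Representing the program as plain data is what lets a tablet UI assemble it by stacking while the AR-HMD executes it in the scene.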

Authors
Hui Ye
City University of Hong Kong, Hong Kong, Hong Kong
Jiaye Leng
City University of Hong Kong, Hong Kong, China
Pengfei Xu
Shenzhen University, Shenzhen, Guangdong, China
Karan Singh
University of Toronto, Toronto, Ontario, Canada
Hongbo Fu
City University of Hong Kong, Hong Kong, China
Paper URL

doi.org/10.1145/3613904.3642527

MineXR: Mining Personalized Extended Reality Interfaces
Abstract

Extended Reality (XR) interfaces offer engaging user experiences, but their effective design requires a nuanced understanding of user behavior and preferences. This knowledge is challenging to obtain without the widespread adoption of XR devices. We introduce MineXR, a design mining workflow and data analysis platform for collecting and analyzing personalized XR user interaction and experience data. MineXR enables elicitation of personalized interfaces from participants in a data collection: for any given context, participants create interface elements using application screenshots from their own smartphone, place them in the environment, and simultaneously preview the resulting XR layout on a headset. Using MineXR, we contribute a dataset of personalized XR interfaces collected from 31 participants, consisting of 695 XR widgets created from 178 unique applications. We provide insights into XR widget functionalities, categories, clusters, UI element types, and placement. Our open-source tools and data support researchers and designers in developing future XR interfaces.
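For a sense of what such a dataset affords, the sketch below shows a per-widget record and a simple aggregation over it. The field names and example rows are assumptions for illustration, not MineXR's published schema.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class XRWidget:
        participant_id: int
        source_app: str          # smartphone app the screenshot came from
        category: str            # e.g., "communication", "productivity"
        element_type: str        # e.g., "button", "live_tile", "media_player"
        placement: tuple         # (x, y, z) position in the environment

    widgets = [
        XRWidget(1, "Maps", "navigation", "live_tile", (0.4, 1.2, -0.8)),
        XRWidget(1, "Messages", "communication", "notification", (-0.2, 1.5, -0.6)),
        XRWidget(2, "Spotify", "media", "media_player", (0.0, 1.0, -1.0)),
    ]

    # Example analysis: which widget categories dominate the collected layouts?
    print(Counter(w.category for w in widgets).most_common())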

Authors
Hyunsung Cho
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yukang Yan
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Kashyap Todi
Reality Labs Research, Redmond, Washington, United States
Mark Parent
Meta, Toronto, Ontario, Canada
Missie Smith
Reality Labs Research, Redmond, Washington, United States
Tanya R. Jonker
Meta Inc., Redmond, Washington, United States
Hrvoje Benko
Meta Inc., Redmond, Washington, United States
David Lindlbauer
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

doi.org/10.1145/3613904.3642394

VirtuWander: Enhancing Multi-modal Interaction for Virtual Tour Guidance through Large Language Models
Abstract

Tour guidance in virtual museums encourages multi-modal interactions to boost user experiences in terms of engagement, immersion, and spatial awareness. Nevertheless, achieving this goal is challenging due to the complexity of comprehending diverse user needs and accommodating personalized user preferences. Informed by a formative study that characterizes guidance-seeking contexts, we establish a multi-modal interaction design framework for virtual tour guidance. We then design VirtuWander, an innovative two-stage system that uses domain-oriented large language models to transform user inquiries into diverse guidance-seeking contexts and facilitate multi-modal interactions. The feasibility and versatility of VirtuWander are demonstrated with virtual guiding examples that encompass various touring scenarios and cater to personalized preferences. We further evaluate VirtuWander through a user study within an immersive simulated museum. The results suggest that our system enhances engaging virtual tour experiences through personalized communication and knowledgeable assistance, indicating its potential for expansion into real-world scenarios.
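A generic two-stage LLM pipeline of the kind the abstract describes can be sketched as follows. Here call_llm is a stand-in for an LLM endpoint, and the context labels are invented examples, not VirtuWander's actual prompts or categories.

    def call_llm(prompt: str) -> str:
        # Placeholder: a real system would call an actual LLM endpoint here.
        return "navigation"

    def stage1_classify(inquiry: str) -> str:
        """Stage 1: map a free-form visitor inquiry to a guidance-seeking context."""
        contexts = ["navigation", "exhibit_explanation", "recommendation"]
        return call_llm(f"Classify into one of {contexts}: {inquiry!r}")

    def stage2_respond(context: str, inquiry: str) -> dict:
        """Stage 2: produce a multi-modal guidance plan for the classified context."""
        speech = call_llm(f"As a museum tour guide, answer ({context}): {inquiry!r}")
        # Pair the spoken answer with a non-verbal cue, e.g., a highlight or path arrow.
        return {"speech": speech, "visual_cue": f"highlight:{context}"}

    inquiry = "Where is the bronze gallery?"
    print(stage2_respond(stage1_classify(inquiry), inquiry))

Splitting classification from response generation lets the second stage condition its answer, and its accompanying visual or auditory cues, on an explicit context label rather than on the raw inquiry alone.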

Authors
Zhan Wang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Linping Yuan
The Hong Kong University of Science and Technology, Hong Kong, China
Liangwei Wang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Bingchuan Jiang
Information Engineering University, Zhengzhou, China
Wei Zeng
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Paper URL

doi.org/10.1145/3613904.3642235
