Large curved displays are ideal for viewing 360° content, such as 3D maps, but typically restrict users to a 180° viewport, leaving information off-screen. Since users naturally direct their heads toward regions on-screen before interacting, head movements offer a promising alternative for workspace manipulation to bring off-screen content into view. We explore rate control functions (linear, sigmoid, polynomial) and zone control functions (continuous, friction, interrupted, additive) to translate head rotations into workspace control, enabling users to access off-screen content. Polynomial rate control emerges as the best choice, achieving the fastest trial times and highest subjective ratings. Using a map navigation task, our second study demonstrates that users perform better with the polynomial head-based technique than with the industry-standard controller-based methods, click-and-drag and joystick-push, for 360° workspace navigation. Based on these findings, we provide guidelines to inform the design of future 360° workspace navigation techniques for large curved displays.
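The abstract names polynomial rate control as the winning mapping from head rotation to workspace motion but does not give the function itself. Below is a minimal illustrative sketch of what such a mapping typically looks like; the deadzone, gain, and exponent values are hypothetical placeholders, not the parameters used in the study.

```python
import math

def polynomial_rate_control(head_angle_deg: float,
                            deadzone_deg: float = 10.0,
                            gain: float = 0.02,
                            exponent: int = 2) -> float:
    """Map a head-rotation offset to a workspace rotation speed (deg/s).

    Hypothetical parameters for illustration only. Offsets inside the
    deadzone produce no workspace motion; beyond it, speed grows
    polynomially, so small head turns give fine control while large
    turns bring far off-screen content into view quickly.
    """
    offset = abs(head_angle_deg) - deadzone_deg
    if offset <= 0:
        return 0.0
    speed = gain * offset ** exponent
    # Preserve the direction of the head turn.
    return math.copysign(speed, head_angle_deg)
```

A linear variant would use `exponent=1`; the polynomial curve's flat start near the deadzone is one plausible reason such mappings feel precise at small offsets.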
Eye strain presents a significant challenge in human-information interaction in virtual reality (VR), as prolonged exposure contributes to various eye problems. This study introduces a new gaze redirection method called EyeXRciser, which passively activates eye movements to help prevent eye muscle stiffness during VR reading. This method achieves gaze redirection by slowly shifting the relative position of the text window within the user’s field of view through a head-bound coordinate system. We implemented our method with two different redirection speeds (an unnoticeable speed of 0.03 rad/s and a noticeable speed of 0.12 rad/s) and conducted a user study (N=24) comparing both with a baseline using a fixed text window. Results show that both our methods successfully minimized the decline in accommodative ability caused by prolonged reading, without negatively impacting reading comprehension. Results also show that the unnoticeable redirection speed produced less subjective discomfort, eye fatigue, and reading distraction than the noticeable speed.
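The redirection mechanism described above — drifting the text window's angular offset within a head-bound frame at a fixed speed — can be sketched as a per-frame update. This is an assumed formulation for illustration; the function name, the target-clamping behaviour, and the parameter names are not from the paper.

```python
def redirect_window(current_angle_rad: float,
                    target_angle_rad: float,
                    speed_rad_per_s: float,
                    dt_s: float) -> float:
    """Advance the text window's angular offset (in a head-bound
    coordinate frame) toward a target at a constant angular speed,
    clamping at the target. Illustrative sketch only.
    """
    step = speed_rad_per_s * dt_s
    delta = target_angle_rad - current_angle_rad
    if abs(delta) <= step:
        # Close enough: snap to the target to avoid overshoot.
        return target_angle_rad
    return current_angle_rad + (step if delta > 0 else -step)
```

At the study's unnoticeable speed (0.03 rad/s), a 0.3 rad shift would take about ten seconds, slow enough to pass under most users' awareness.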
Immersive videos (IVs) provide 360° environments that create a strong sense of presence and spatial exploration. Unlike traditional videos, IVs distribute information across multiple directions, making comparison cognitively demanding and highly dependent on interaction techniques. With the growing adoption of IVs, effective comparison techniques have become an essential yet underexplored area of research. Inspired by the "sliding" concept in 2D media comparison, we integrate two established comparison strategies from the literature—toggle and side-by-side—to support IV comparison with greater flexibility. For an in-depth understanding of different strategies, we adapt and implement five IV comparison techniques across VR and 2D environments: SlideInVR, ToggleInVR, SlideIn2D, ToggleIn2D, and SideBySideIn2D. We then conduct a user study (N=20) to examine how these techniques shape users' perceptions, strategies, and workflows. Our findings provide empirical insights into the strengths and limitations of each technique, underscoring the need to switch between comparison approaches across scenarios. Notably, participants consistently rate SlideInVR and SlideIn2D as the most flexible and most favored methods for IV comparison.
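The "sliding" concept borrowed from 2D media comparison amounts to compositing the two videos on either side of a movable divider. A minimal sketch of that composition for one row of pixels, with hypothetical names and a plain-list pixel representation assumed for clarity:

```python
def slide_compare_row(row_a: list, row_b: list, slider_x: int) -> list:
    """Compose one pixel row of a 'slide' comparison view:
    pixels left of the slider come from video A, pixels at and
    right of it from video B. Illustrative sketch only; a real
    renderer would do this per frame on the GPU.
    """
    return row_a[:slider_x] + row_b[slider_x:]
```

Toggle, by contrast, shows one full frame at a time and swaps on input, while side-by-side renders both frames in separate viewports; the slide divider is what gives the technique its reported flexibility.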
Conventional Mixed Reality (MR) workspaces are frequently organized in cockpit-like layouts, where multiple floating windows surround the user. While this configuration facilitates access to digital content, it often induces occlusion, reducing understanding of the physical environment and limiting access to real-world objects. To overcome this challenge, we present Contour-Adaptive Mixed Environment Overlays (CAMEO), an MR interface that drapes virtual windows onto physical surfaces. This design integrates digital content with nearby items, thereby improving users’ visual access to background objects and supporting interaction with them. We evaluate CAMEO in two controlled studies. The first demonstrates that draping reduces hand-movement detours relative to flat mid-air surfaces, enabling more direct interaction with nearby items. The second shows that controlled window deformation does not significantly impair text legibility when compared to flat surfaces. Together, these findings contribute a novel design paradigm for MR workspaces that balances immersion, readability, and environmental understanding.
Extended Reality (XR) headsets enable large, reconfigurable multi-display workspaces and support view manipulation, allowing the workspace to reposition itself around the user. Cursor warping similarly reduces traversal distance and pointer search by reinitialising the cursor at defined locations. Yet when both mechanisms operate together, the spatial relationship between user, displays, and cursor becomes dynamic, and it remains unclear how cursor repositioning behaves when the workspace itself moves.
In a study (N=20) of five cursor-warping strategies under two view manipulations, we show that the benefits of the two mechanisms do not automatically combine: workspace motion can disrupt spatial consistency and alter both performance and movement costs. We also show that continuous cursor movement in world space is limited compared to alternative warping techniques, and that cursor behaviour and view control are tightly coupled. Hence, cursor initialisation and view manipulation must be co-designed to support efficient and comfortable interaction in XR multi-display environments.
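The coupling between cursor behaviour and view control comes down to which reference frame the cursor lives in. A small sketch of the two options, with hypothetical names and yaw-only rotation assumed for simplicity:

```python
def cursor_world_yaw(cursor_yaw: float,
                     workspace_yaw: float,
                     frame: str) -> float:
    """Resolve a cursor's world-space yaw (radians) under a rotated
    workspace. Illustrative sketch, not the paper's implementation.

    frame='workspace': the cursor is anchored to the workspace and
    follows it when the view manipulation moves the displays.
    frame='world': the cursor keeps its absolute direction and so
    drifts across displays whenever the workspace rotates.
    """
    if frame == "workspace":
        return cursor_yaw + workspace_yaw
    return cursor_yaw
```

Under a world-space cursor, every view manipulation silently changes which display the cursor points at, which is one plausible mechanism behind the disrupted spatial consistency reported above.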
Effective conceptual design collaboration requires teams to build shared mental models (SMMs). Although Extended Reality (XR) technologies support design collaboration, they often lack structured cognition support for such alignment. To address this, we situated this research in the domain of automotive design and first identified key cognitive challenges in its collaborative practice. We then developed XSynth, a GenAI-powered XR system grounded in Concept–Knowledge Theory. XSynth scaffolds designers’ reasoning, externalizes individual mental models as knowledge graphs, and merges them into a unified graph to facilitate SMM building. We evaluated XSynth in a within-subjects experiment with 10 design teams (N=30) using a mixed-methods approach. Results showed that XSynth significantly reduced workload, enhanced creativity support, strengthened perceived SMMs, and improved design performance. This research contributes to HCI by introducing the design and implementation of a theory-grounded, GenAI-powered, XR-based cognition support tool. It also offers empirical evidence of the effectiveness of XSynth, and design implications for future cognition support tools in collaborative settings.
We present the Finger-Mounted Extending Rod, a wearable device that transforms fingers into virtual tools by modulating fingertip mass distribution. We employ linear actuators on fingers that extend or retract metal rods according to finger poses, generating rotational inertia while redirecting the hand to natural grip postures. Through three user studies, we evaluate (1) finger pose embodiment under visual redirection and tool matching via inertia tensor similarity, (2) perception of tool length and rotational inertia, and (3) VR tool interaction experience. Results show that 10 of 15 finger poses maintained embodiment, exhibiting inertia tensor similarities of 0.936–0.991 with their matched tools and yielding perceived inertia amplifications of 4.19–10.45×; moreover, aligning inertia tensors to virtual tools enhanced immersion, realism, and enjoyment compared to misaligned or no-device conditions across six VR scenarios. We conclude by discussing how the system renders virtual tools through the fingers and enhances their perception with inertia modulation.
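The abstract reports inertia tensor similarities of 0.936–0.991 without defining the metric. One plausible formulation, shown here purely for illustration, is the cosine similarity between the two 3×3 tensors flattened to 9-vectors; the paper's exact metric may differ.

```python
import math

def inertia_tensor_similarity(I_a, I_b) -> float:
    """Cosine similarity between two 3x3 inertia tensors, each
    flattened to a 9-vector and compared after normalisation.
    An assumed formulation, not necessarily the paper's metric.
    Equals 1.0 for tensors identical up to a positive scale,
    which makes it invariant to overall mass/size differences.
    """
    a = [x for row in I_a for x in row]
    b = [x for row in I_b for x in row]
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Scale invariance matters here because the extending rod can only approximate a tool's inertia distribution, not its absolute magnitude.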