Movement and Motor Learning B

Conference Name
CHI 2024
Metrics of Motor Learning for Analyzing Movement Mapping in Virtual Reality
Abstract

Virtual reality (VR) techniques can modify how physical body movements are mapped to the virtual body. However, it is unclear how users learn such mappings and, therefore, how the learning process may impede interaction. To understand and quantify the learning of the techniques, we design new metrics explicitly for VR interactions based on the motor learning literature. We evaluate the metrics in three object selection and manipulation tasks, employing linear-translational and nonlinear-rotational gains and finger-to-arm mapping. The study shows that the metrics demonstrate known characteristics of motor learning similar to task completion time, typically with faster initial learning followed by more gradual improvements over time. More importantly, the metrics capture learning behaviors that task completion time does not. We discuss how the metrics can provide new insights into how users adapt to movement mappings and how they can help analyze and improve such techniques.
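As a rough illustration of the kind of movement mapping and learning curve the abstract refers to, the sketch below applies a hypothetical linear translational gain to hand positions and fits a power law of practice to per-trial completion times. The function names, the gain value, and the synthetic trial data are illustrative assumptions, not the paper's metrics or implementation.

```python
# Illustrative sketch only: a linear translational gain that remaps real hand
# displacement to virtual hand displacement, plus a power-law-of-practice fit
# that captures fast initial learning followed by gradual improvement.
import numpy as np
from scipy.optimize import curve_fit

def apply_translational_gain(real_pos, origin, gain=1.5):
    """Map a real hand position to a virtual one by scaling its displacement from an origin."""
    real_pos, origin = np.asarray(real_pos, float), np.asarray(origin, float)
    return origin + gain * (real_pos - origin)

def power_law(trial, a, b):
    """Power law of practice: performance improves quickly at first, then more slowly."""
    return a * trial ** (-b)

# Hypothetical per-trial completion times (s) for one participant.
rng = np.random.default_rng(0)
trials = np.arange(1, 21, dtype=float)
times = 4.0 * trials ** (-0.35) + rng.normal(0.0, 0.1, trials.size)

(a, b), _ = curve_fit(power_law, trials, times, p0=(4.0, 0.3))
print(apply_translational_gain((0.30, 1.10, 0.40), origin=(0.0, 1.0, 0.3)))
print(f"fitted learning curve: T(n) = {a:.2f} * n^(-{b:.2f})")
```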

Award
Honorable Mention
Authors
Difeng Yu
University of Copenhagen, Copenhagen, Denmark
Mantas Cibulskis
University of Copenhagen, Copenhagen, Denmark
Erik Skjoldan Mortensen
University of Copenhagen, Copenhagen, Denmark
Mark Schram Christensen
University of Copenhagen, Copenhagen, Denmark
Joanna Bergström
University of Copenhagen, Copenhagen, Denmark
Paper URL

https://doi.org/10.1145/3613904.3642354

Video
WieldingCanvas: Interactive Sketch Canvases for Freehand Drawing in VR
Abstract

Sketching in Virtual Reality (VR) is challenging mainly due to the absence of physical surface support and virtual depth perception cues, which induce high cognitive and sensorimotor load. This paper presents WieldingCanvas, an interactive VR sketching platform that integrates canvas manipulations to draw lines and curves in 3D. Informed by real-life examples of two-handed creative activities, WieldingCanvas interprets users' spatial gestures to move, swing, rotate, transform, or fold a virtual canvas, whereby users simply draw primitive strokes on the canvas, which are turned into finer and more sophisticated shapes via the manipulation of the canvas. We evaluated the capability and user experience of WieldingCanvas with three studies where participants were asked to sketch target shapes. A set of freehand sketches of high aesthetic quality was created, and the results demonstrated that WieldingCanvas can assist users with creating 3D sketches.
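To make the canvas-manipulation idea concrete, here is a minimal geometric sketch (my own illustration, not WieldingCanvas's code): stroke points drawn in a canvas's local 2D frame are lifted into world space through the canvas pose, so rotating the canvas while drawing sweeps a flat stroke into a 3D curve. The function names, poses, and sample motion are hypothetical.

```python
# Geometric sketch only: stroke points drawn in a canvas's local 2D frame are
# lifted into 3D world space via the canvas pose, so moving or rotating the
# canvas while drawing sweeps a flat stroke into a 3D curve.
import numpy as np

def rotation_about_y(angle_rad):
    """3x3 rotation matrix about the world Y (up) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def canvas_to_world(point_2d, canvas_position, canvas_rotation):
    """Transform a 2D canvas-local point (on the canvas's XY plane) into world coordinates."""
    local = np.array([point_2d[0], point_2d[1], 0.0])
    return np.asarray(canvas_position, float) + canvas_rotation @ local

# Hypothetical example: the pen draws a straight 30 cm stroke while the canvas
# swings 90 degrees, so the recorded world-space stroke curves in 3D.
stroke = []
for t in np.linspace(0.0, 1.0, 50):
    pen_on_canvas = (0.3 * t, 0.0)
    pose_rotation = rotation_about_y(t * np.pi / 2)
    stroke.append(canvas_to_world(pen_on_canvas, (0.0, 1.2, 0.5), pose_rotation))
print(np.round(stroke[0], 3), np.round(stroke[-1], 3))
```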

Authors
Xiaohui Tan
Capital Normal University, Beijing, China
Zhenxuan He
Chinese Academy of Sciences, Beijing, China
Can Liu
City University of Hong Kong, Hong Kong, China
Mingming Fan
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Tianren Luo
Institute of Software, Beijing, China
Zitao Liu
Jinan University, Guangzhou, Guangdong, China
Mi Tian
TAL Education Group, Beijing, China
Teng Han
Institute of Software, Chinese Academy of Sciences, Beijing, China
Feng Tian
Institute of Software, Chinese Academy of Sciences, Beijing, China
Paper URL

https://doi.org/10.1145/3613904.3642047

Video
Better Definition and Calculation of Throughput and Effective Parameters for Steering to Account for Subjective Speed-accuracy Tradeoffs
Abstract

In Fitts' law studies to investigate pointing, throughput is used to characterize the performance of input devices and users, which is claimed to be independent of task difficulty or the user's subjective speed-accuracy bias. While throughput has been recognized as a useful metric for target-pointing tasks, the corresponding formulation for path-steering tasks and its evaluation have not been thoroughly examined in the past. In this paper, we conducted three experiments using linear, circular, and sine-wave path shapes to propose and investigate a novel formulation for the effective parameters and the throughput of steering tasks. Our results show that the effective width substantially improves the fit to data with mixed speed-accuracy biases for all task shapes. Effective width also smoothed out the throughput across all biases, while the usefulness of the effective amplitude depended on the task shape. Our study thus advances the understanding of user performance in trajectory-based tasks.
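For readers unfamiliar with effective parameters, the sketch below computes a steering throughput in the spirit the abstract describes, assuming the classic Accot-Zhai index of difficulty ID = A/W and a Fitts-style effective width We = 4.133 × SD of lateral deviations. The paper's exact formulation, the function names, and the sample values here are assumptions, not taken from the paper.

```python
# Sketch under stated assumptions: Accot-Zhai steering index of difficulty
# ID = A / W and a Fitts-style effective width We = 4.133 * SD of the cursor's
# lateral deviations; the paper's actual formulation may differ.
import numpy as np

def effective_width(lateral_deviations):
    """Effective path width derived from the spread of lateral deviations."""
    return 4.133 * np.std(lateral_deviations, ddof=1)

def steering_throughput(path_length, lateral_deviations, movement_time):
    """Throughput as the effective index of difficulty ID_e = A / We divided by movement time."""
    return (path_length / effective_width(lateral_deviations)) / movement_time

# Hypothetical trial: a 0.4 m straight tunnel steered in 1.8 s, with signed
# lateral deviations (m) sampled along the path (~4 mm standard deviation).
rng = np.random.default_rng(1)
deviations = rng.normal(0.0, 0.004, 200)
print(f"throughput = {steering_throughput(0.4, deviations, 1.8):.2f} per second")
```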

Authors
Nobuhito Kasahara
Meiji University, Tokyo, Japan
Yosuke Oba
Meiji University, Tokyo, Japan
Shota Yamanaka
Yahoo Japan Corporation, Tokyo, Japan
Anil Ufuk Batmaz
Concordia University, Montreal, Quebec, Canada
Wolfgang Stuerzlinger
Simon Fraser University, Vancouver, British Columbia, Canada
Homei Miyashita
Meiji University, Tokyo, Japan
Paper URL

https://doi.org/10.1145/3613904.3642084

Video
Design Space of Visual Feedforward And Corrective Feedback in XR-Based Motion Guidance Systems
Abstract

Extended reality (XR) technologies are well suited to assisting individuals in learning motor skills and movements, referred to as motion guidance. In motion guidance, "feedforward" provides instructional cues for the motions to be performed, whereas "feedback" provides cues that help correct mistakes and minimize errors. Designing synergistic feedforward and feedback is vital to providing an effective learning experience, but the interplay between the two has not yet been adequately explored. Based on a survey of the literature, we propose a design space for both motion feedforward and corrective feedback in XR, and describe the interaction effects between them. We identify common design approaches of XR-based motion guidance found in our literature corpus, and discuss them through the lens of our design dimensions. We then discuss additional contextual factors and considerations that influence this design, together with future research opportunities for motion guidance in XR.
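As a toy illustration of the feedforward/feedback distinction (my own sketch, not a system from the surveyed corpus): feedforward presents the target pose to perform, while corrective feedback measures how far the tracked pose deviates and cues a correction when the error exceeds a threshold. The joint names, poses, and 5 cm threshold are arbitrary assumptions.

```python
# Toy sketch of feedforward vs. corrective feedback in motion guidance.
import numpy as np

def feedforward_cue(target_pose):
    """Instructional cue: show where each tracked joint should go next."""
    return {joint: pos for joint, pos in target_pose.items()}

def corrective_feedback(current_pose, target_pose, threshold_m=0.05):
    """Error-correction cue: report joints deviating by more than the threshold."""
    corrections = {}
    for joint, target in target_pose.items():
        error = np.linalg.norm(np.asarray(current_pose[joint]) - np.asarray(target))
        if error > threshold_m:
            corrections[joint] = error
    return corrections

target = {"right_wrist": (0.3, 1.2, 0.4), "right_elbow": (0.2, 1.0, 0.3)}
current = {"right_wrist": (0.25, 1.1, 0.4), "right_elbow": (0.2, 1.0, 0.3)}
print(feedforward_cue(target))
print(corrective_feedback(current, target))  # e.g. {'right_wrist': ~0.11}
```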

Authors
Xingyao Yu
University of Stuttgart, Stuttgart, Germany
Benjamin Lee
University of Stuttgart, Stuttgart, Germany
Michael Sedlmair
University of Stuttgart, Stuttgart, Germany
Paper URL

https://doi.org/10.1145/3613904.3642143

Video