List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

4
Touch&Fold: A Foldable Haptic Actuator for Rendering Touch in Mixed Reality
Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States), Pengyu Li (University of Chicago, Chicago, Illinois, United States), Romain Nith (University of Chicago, Chicago, Illinois, United States), Joshua Fonseca (University of Chicago, Chicago, Illinois, United States), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a nail-mounted foldable haptic device that provides tactile feedback to mixed reality (MR) environments by pressing against the user’s fingerpad when a user touches a virtual object. What is novel in our device is that it quickly tucks away when the user interacts with real-world objects. Its design allows it to fold back on top of the user’s nail when not in use, keeping the user’s fingerpad free to, for instance, manipulate handheld tools and other objects while in MR. To achieve this, we engineered a wireless and self-contained haptic device, which measures 24×24×41 mm and weighs 9.5 g. Furthermore, our foldable end-effector also features a linear resonant actuator, allowing it to render not only touch contacts (i.e., pressure) but also textures (i.e., vibrations). We demonstrate how our device renders contacts with MR surfaces, buttons, low- and high-frequency textures. In our first user study, we found that participants perceived our device to be more realistic than a previous haptic device that also leaves the fingerpad free (i.e., fingernail vibration). In our second user study, we investigated the participants’ experience while using our device in a real-world task that involved physical objects. We found that our device allowed participants to use the same finger to manipulate handheld tools, small objects, and even feel textures and liquids, without much hindrance to their dexterity, while feeling haptic feedback when touching MR interfaces.
4
XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces
João Marcelo Evangelista Belo (Aarhus University, Aarhus, Denmark), Anna Maria Feit (ETH Zurich, Zurich, Switzerland), Tiare Feuchtner (Aarhus University, Aarhus, Denmark), Kaj Grønbæk (Aarhus University, Aarhus, Denmark)
Arm discomfort is a common issue in Cross Reality applications involving prolonged mid-air interaction. Solving this problem is difficult because of the lack of tools and guidelines for 3D user interface design. Therefore, we propose a method to make existing ergonomic metrics available to creators during design by estimating the interaction cost at each reachable position in the user's environment. We present XRgonomics, a toolkit to visualize the interaction cost and make it available at runtime, allowing creators to identify UI positions that optimize users' comfort. Two scenarios show how the toolkit can support 3D UI design and dynamic adaptation of UIs based on spatial constraints. We present results from a walkthrough demonstration, which highlight the potential of XRgonomics to make ergonomics metrics accessible during the design and development of 3D UIs. Finally, we discuss how the toolkit may address design goals beyond ergonomics.
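The core idea, precomputing an ergonomic cost for every reachable position, can be sketched in a few lines. The sketch below is not the authors' implementation: the voxel grid, the `interaction_cost` proxy, and its weights are illustrative assumptions standing in for the paper's ergonomic metrics.

```python
import numpy as np

ARM_LENGTH = 0.6  # metres, assumed adult arm length

def interaction_cost(p, shoulder=np.zeros(3)):
    """Toy ergonomic cost: favour positions close to the body and below shoulder level."""
    v = p - shoulder
    reach = np.linalg.norm(v)
    if reach > ARM_LENGTH:
        return np.inf                        # unreachable position
    extension = reach / ARM_LENGTH           # 0 (at shoulder) .. 1 (full reach)
    elevation = max(v[2], 0.0) / ARM_LENGTH  # penalise raising the arm
    return 0.6 * extension + 0.4 * elevation

# Discretise the space in front of the user into 5 cm voxels and rank them.
xs = np.arange(-0.6, 0.61, 0.05)
ys = np.arange(0.2, 0.61, 0.05)   # forward axis, kept in front of the body
zs = np.arange(-0.3, 0.61, 0.05)
voxels = [np.array([x, y, z]) for x in xs for y in ys for z in zs]
best_cost, best_p = min(((interaction_cost(p), p) for p in voxels), key=lambda c: c[0])
print(f"cheapest UI position ~ {best_p} (cost {best_cost:.2f})")
```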
4
ThermoCaress: A Wearable Haptic Device with Illusory Moving Thermal Stimulation
Yuhu Liu (The University of Tokyo, Tokyo, Japan), Satoshi Nishikawa (The University of Tokyo, Tokyo, Japan), Young ah Seong (Hosei University, Tokyo, Japan), Ryuma Niiyama (The University of Tokyo, Tokyo, Japan), Yasuo Kuniyoshi (The University of Tokyo, Tokyo, Japan)
We propose ThermoCaress, a haptic device to create a stroking sensation on the forearm using pressure force and present thermal feedback simultaneously. In our method, based on the phenomenon of thermal referral, by overlapping a stroke of pressure force, users feel as if the thermal stimulation moves although the position of the temperature source is static. We designed the device to be compact and soft, using microblowers and inflatable pouches for presenting pressure force and water for presenting thermal feedback. Our user study showed that the device succeeded in generating thermal referrals and creating a moving thermal illusion. The results also suggested that cold temperatures enhance the pleasantness of stroking. Our findings contribute to expanding the potential of thermal haptic devices.
3
Physiological and Perceptual Responses to Athletic Avatars while Cycling in Virtual Reality
Martin Kocur (University of Regensburg, Regensburg, Germany), Florian Habler (University of Regensburg, Regensburg, Germany), Valentin Schwind (Frankfurt University of Applied Sciences, Frankfurt, Germany), Paweł W. Woźniak (Utrecht University, Utrecht, Netherlands), Christian Wolff (University of Regensburg, Regensburg, Bavaria, Germany), Niels Henze (University of Regensburg, Regensburg, Germany)
Avatars in virtual reality (VR) enable embodied experiences and induce the Proteus effect - a shift in behavior and attitude to mimic one's digital representation. Previous work found that avatars associated with physical strength can decrease users' perceived exertion when performing physical tasks. However, it is unknown if an avatar's appearance can also influence the user's physiological response to exercises. Therefore, we conducted an experiment with 24 participants to investigate the effect of avatars' athleticism on heart rate and perceived exertion while cycling in VR following a standardized protocol. We found that the avatars' athleticism has a significant and systematic effect on users' heart rate and perceived exertion. We discuss potential moderators such as body ownership and users' level of fitness. Our work contributes to the emerging area of VR exercise systems.
3
Teardrop Glasses: Pseudo Tears Induce Sadness in You and Those Around You
Shigeo Yoshida (The University of Tokyo, Tokyo, Japan), Takuji Narumi (The University of Tokyo, Tokyo, Japan), Tomohiro Tanikawa (The University of Tokyo, Tokyo, Japan), Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan), Michitaka Hirose (The University of Tokyo, Tokyo, Japan)
Emotional contagion is a phenomenon in which one's emotions are transmitted among individuals unconsciously by observing others' emotional expressions. In this paper, we propose a method for mediating people's emotions by triggering emotional contagion through artificial bodily changes such as pseudo tears. We focused on shedding tears because of the link to several emotions besides sadness. In addition, it is expected that shedding tears would induce emotional contagion because it is observable by others. We designed an eyeglasses-style wearable device, Teardrop glasses, that release water drops near the wearer's eyes. The drops flow down the cheeks and emulate real tears. The study revealed that artificial crying with pseudo tears increased sadness among both wearers and those observing them. Moreover, artificial crying attenuated happiness and positive feelings in observers. Our findings show that actual bodily changes are not necessary for inducing emotional contagion as artificial bodily changes are also sufficient.
3
Improving Viewing Experiences of First-Person Shooter Gameplays with Automatically-Generated Motion Effects
Gyeore Yun (POSTECH, Pohang, Korea, Republic of), Hyoseung Lee (POSTECH, Pohang, Gyeongsangbuk-do, Korea, Republic of), Sangyoon Han (Pohang University of Science and Technology (POSTECH), Pohang, Korea, Republic of), Seungmoon Choi (Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, Korea, Republic of)
Millions of people now enjoy watching video gameplay at eSports stadiums or at home. We seek a method that improves gameplay spectator or viewer experiences by presenting multisensory stimuli. Using a motion chair, we provide motion effects automatically generated from the audiovisual stream to viewers watching a first-person shooter (FPS) gameplay. The motion effects express the game character's movement and gunfire action. We describe algorithms for the computation of such motion effects developed using computer vision techniques and deep learning. Through a user study, we demonstrate that our method of providing motion effects significantly improves the viewing experiences of FPS gameplay. The contributions of this paper are the motion synthesis algorithms integrated for FPS games and the empirical evidence for the benefits of multisensory gameplay viewing.
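One ingredient of such a pipeline, turning sudden audio bursts such as gunfire into impulse commands for a motion chair, can be sketched with a simple short-time-energy heuristic. This is an assumed stand-in, not the paper's deep-learning model; `impulse_frames` and its thresholds are hypothetical.

```python
import numpy as np

def impulse_frames(samples, sr, frame_ms=10, ratio=4.0):
    """Flag frames whose short-time energy jumps well above a running average."""
    n = int(sr * frame_ms / 1000)
    frames = samples[: len(samples) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)
    avg = np.convolve(energy, np.ones(20) / 20, mode="same")  # ~200 ms context
    return np.where(energy > ratio * np.maximum(avg, 1e-8))[0] * frame_ms

# Demo on synthetic audio: a noise floor with two loud 'shots'.
sr = 16000
audio = 0.01 * np.random.randn(sr)
for t in (0.25, 0.7):
    i = int(t * sr)
    audio[i : i + 400] += 0.8 * np.random.randn(400)
print("impulses at ms:", impulse_frames(audio, sr)[:5])
```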
3
Interoceptive Interaction: An Embodied Metaphor Inspired Approach to Designing for Meditation
Claudia Daudén Roquet (Lancaster University, Lancaster, United Kingdom), Corina Sas (Lancaster University, Lancaster, United Kingdom)
Meditation is a mind-body practice with considerable wellbeing benefits that can take different forms. Novices usually start with focused attention meditation that supports regulation of attention towards an inward focus or internal bodily sensations and away from external stimuli or distractors. Most meditation technologies employ metaphorical mappings of meditative states to visual or soundscape representations to support awareness of mind wandering and attention regulation, although the rationale for such mappings is seldom articulated. Moreover, such external modalities also take the focus of attention away from the body. We advance the concept of interoceptive interaction and employ the embodied metaphor theory to explore the design of mappings to the interoceptive sense of thermoception. We illustrate this concept with WarmMind, an on-body interface integrating heat actuators for mapping meditation states. We report on an exploratory study with 10 participants comparing our novel thermal metaphors for mapping meditation states with comparable ones in the aural modality, as provided by the Muse meditation app. Findings indicate a tension between the soundscape's metaphors, which are highly discoverable but hinder attention regulation, and the ambiguous thermal metaphors, which were experienced as coming from the body and supported attention regulation. We discuss the qualities of embodied metaphors underpinning this tension and propose an initial framework to inform the design of metaphorical mappings for meditation technologies.
3
Bad Breakdowns, Useful Seams, and Face Slapping: Analysis of VR Fails on YouTube
Emily Dao (Monash University, Melbourne, Victoria, Australia), Andreea Muresan (University of Copenhagen, Copenhagen, Denmark), Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark), Jarrod Knibbe (University of Melbourne, Melbourne, Australia)
Virtual reality (VR) is increasingly used in complex social and physical settings outside of the lab. However, not much is known about how these settings influence use, nor how to design for them. We analyse 233 YouTube videos of VR Fails to: (1) understand when breakdowns occur, and (2) reveal how the seams between VR use and the social and physical setting emerge. The videos show a variety of fails, including users flailing, colliding with surroundings, and hitting spectators. They also suggest causes of the fails, including fear, sensorimotor mismatches, and spectator participation. We use the videos as inspiration to generate design ideas. For example, we discuss more flexible boundaries between the real and virtual world, ways of involving spectators, and interaction designs to help overcome fear. Based on the findings, we further discuss the ‘moment of breakdown’ as an opportunity for designing engaging and enhanced VR experiences.
3
Experiencing Simulated Confrontations in Virtual Reality
Patrick Dickinson (University of Lincoln, Lincoln, United Kingdom), Arthur Jones (University of Lincoln, Lincoln, Lincolnshire, United Kingdom), Wayne Christian (University of Lincoln, Lincoln, Lincolnshire, United Kingdom), Andrew Westerside (University of Lincoln, Lincoln, United Kingdom), Francis Mulloy (University of Lincoln, Lincoln, United Kingdom), Kathrin Gerling (KU Leuven, Leuven, Belgium), Kieran Hicks (University of Lincoln, Lincoln, Lincolnshire, United Kingdom), Liam Wilson (University of Lincoln, Lincoln, United Kingdom), Adrian Parke (University of the West of Scotland, Glasgow, United Kingdom)
The use of virtual reality (VR) to simulate confrontational human behaviour has significant potential for use in training, where the recreation of uncomfortable feelings may help users to prepare for challenging real-life situations. In this paper we present a user study (n=68) in which participants experienced simulated confrontational behaviour performed by a virtual character either in immersive VR, or on a 2D display. Participants reported a higher elevation in anxiety in VR, which correlated positively with a perceived sense of physical space. Character believability was influenced negatively by visual elements of the simulation, and positively by behavioural elements, which complements findings from previous work. We recommend the use of VR for simulations of confrontational behaviour, where a realistic emotional response is part of the intended experience. We also discuss incorporation of domain knowledge of human behaviours, and carefully crafted motion-captured sequences, to increase users' sense of believability.
3
Can Playing with Toy Blocks Reflect Behavior Problems in Children?
Xiyue Wang (Tohoku University, Sendai, Japan), Kazuki Takashima (Tohoku University, Sendai, Japan), Tomoaki Adachi (Miyagi Gakuin Women's University, Sendai, Miyagi, Japan), Yoshifumi Kitamura (Tohoku University, Sendai, Japan)
Although children’s behavioral and mental problems are generally diagnosed in clinical settings, the prediction and awareness of children’s mental wellness in daily settings are getting increased attention. Toy blocks are both accessible in most children’s daily lives and provide physicality as a unique non-verbal channel to express their inner world. In this paper, we propose a toy block approach for predicting a range of behavior problems in young children (4-6 years old) measured by the Child Behavior Checklist (CBCL). We defined and classified a set of quantitative play actions from IMU-embedded toy blocks. Play data collected from 78 preschoolers revealed that specific play actions and patterns indicate total problems, internalizing problems, and aggressive behavior in children. The results align with our qualitative observations, and suggest the potential of predicting the clinical behavior problems of children based on short free-play sessions with sensor-embedded toy blocks.
3
Mindless Attractor: A False-Positive Resistant Intervention for Drawing Attention Using Auditory Perturbation
Riku Arakawa (The University of Tokyo, Hongo, Japan), Hiromu Yakura (University of Tsukuba, Tsukuba, Japan)
Explicitly alerting users is not always an optimal intervention, especially when they are not motivated to obey. For example, in video-based learning, learners who are distracted from the video would not follow an alert asking them to pay attention. Inspired by the concept of Mindless Computing, we propose a novel intervention approach, Mindless Attractor, that leverages the nature of human speech communication to help learners refocus their attention without relying on their motivation. Specifically, it perturbs the voice in the video to direct their attention without consuming their conscious awareness. Our experiments not only confirmed the validity of the proposed approach but also emphasized its advantages in combination with a machine learning-based sensing module: it would not frustrate users even when the intervention is activated by a false-positive detection of their attentive state. Our intervention approach can be a reliable way to induce behavioral change in human-AI symbiosis.
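The perturbation itself can be as simple as a momentary gain change on the speech signal. Below is a minimal sketch assuming a volume-only perturbation and a boolean `distracted` flag from some external attention sensor; both simplifications are ours, not the paper's exact design.

```python
import numpy as np

def perturb(chunk, distracted, gain_db=6.0):
    """Briefly raise (or lower) the speech volume when the learner looks away."""
    if not distracted:
        return chunk
    gain = 10 ** (np.random.choice([gain_db, -gain_db]) / 20)  # +/-6 dB nudge
    return np.clip(chunk * gain, -1.0, 1.0)

# Feed 100 ms chunks through the perturbation, flagging chunk 5 as distracted.
sr = 16000
voice = 0.2 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # placeholder 'voice'
out = np.concatenate([
    perturb(c, distracted=(i == 5))
    for i, c in enumerate(np.split(voice, 10))
])
print("peak level after perturbation:", round(float(abs(out).max()), 2))
```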
3
Flower Jelly Printer: Slit Injection Printing for Parametrically Designed Flower Jelly
Mako Miyatake (The University of Tokyo, Tokyo, Japan), Koya Narumi (The University of Tokyo, Tokyo, Japan), Yuji Sekiya (The University of Tokyo, Bunkyo-ku, Tokyo, Japan), Yoshihiro Kawahara (The University of Tokyo, Bunkyo-ku, Tokyo, Japan)
Flower jellies, a delicate dessert in which a flower-shaped jelly floats inside another clear jelly, fascinate people with both their beauty and elaborate construction. In efforts to simplify the challenging fabrication and enrich the design space of this dessert, we present Flower Jelly Printer: a printing device and design software for digitally fabricating flower jellies. Our design software lets users play with parameters and preview the resulting forms until achieving their desired shapes. We also developed slit injection printing, which directly injects colored jelly into a base jelly, and share several design examples to show the breadth of design possibilities. Finally, a user study with novice and experienced users demonstrates that our system benefits creators of all experience levels through iterative design and precise fabrication. We hope to enable more people to design and create their own flower jellies while expanding access and the design space for digitally fabricated foods.
3
Proxemics and Social Interactions in an Instrumented Virtual Reality Workshop
Julie R. Williamson (University of Glasgow, Glasgow, United Kingdom), Jie Li (Centrum Wiskunde & Informatica, Amsterdam, Netherlands), David A. Shamma (Centrum Wiskunde & Informatica, Amsterdam, Netherlands), Vinoba Vinayagamoorthy (BBC Research & Development, London, United Kingdom), Pablo Cesar (CWI, Amsterdam, Netherlands)
Virtual environments (VEs) can create collaborative and social spaces, which are increasingly important in the face of remote work and travel reduction. Recent advances, such as more open and widely available platforms, create new possibilities to observe and analyse interaction in VEs. Using a custom instrumented build of Mozilla Hubs to measure position and orientation, we conducted an academic workshop to facilitate a range of typical workshop activities. We analysed social interactions during a keynote, small group breakouts, and informal networking/hallway conversations. Our mixed-methods approach combined environment logging, observations, and semi-structured interviews. The results demonstrate how small and large spaces influenced group formation, shared attention, and personal space, where smaller rooms facilitated more cohesive groups while larger rooms made small group formation challenging but personal space more flexible. Beyond our findings, we show how the combination of data and insights can fuel collaborative spaces' design and deliver more effective virtual workshops.
3
SoniBand: Understanding the Effects of Metaphorical Movement Sonifications on Body Perception and Physical Activity
Judith Ley-Flores (Universidad Carlos III de Madrid, Leganes, Madrid, Spain), Laia Turmo Vidal (Uppsala University, Uppsala, Sweden), Nadia Berthouze (University College London, London, United Kingdom), Aneesha Singh (University College London, London, United Kingdom), Frederic Bevilacqua (STMS IRCAM-CNRS-Sorbonne Université, Paris, France), Ana Tajadura-Jiménez (Universidad Carlos III de Madrid / University College London, Madrid / London, Spain)
Negative body perceptions are a major predictor of physical inactivity, a serious health concern. Sensory feedback can be used to alter such body perception; movement sonification, in particular, has been suggested to affect body perception and levels of physical activity (PA) in inactive people. We investigated how metaphorical sounds impact body perception and PA. We report two qualitative studies centered on performing different strengthening/flexibility exercises using SoniBand, a wearable that augments movement through different sounds. The first study involved physically active participants and served to obtain a nuanced understanding of the sonifications’ impact. The second, in the home of physically inactive participants, served to identify which effects could support PA adherence. Our findings show that movement sonification based on metaphors led to changes in body perception (e.g., feeling strong) and PA (e.g., repetitions) in both populations, but effects could differ according to the existing PA-level. We discuss principles for metaphor-based sonification design to foster PA.
3
Data-Driven Mark Orientation for Trend Estimation in Scatterplots
Tingting Liu (School of Computer Science, Qingdao, Shandong, China), Xiaotong Li (School of Computer Science, Qingdao, Shandong, China), Chen Bao (Shandong University, Qingdao, Shandong, China), Michael Correll (Tableau Software, Seattle, Washington, United States), Changhe Tu (Shandong University, Qingdao, China), Oliver Deussen (University of Konstanz, Konstanz, Germany), Yunhai Wang (Shandong University, Qingdao, China)
A common task for scatterplots is communicating trends in bivariate data. However, the ability of people to visually estimate these trends is under-explored, especially when the data violate assumptions required for common statistical models, or visual trend estimates are in conflict with statistical ones. In such cases, designers may need to intervene and de-bias these estimations, or otherwise inform viewers about differences between statistical and visual trend estimations. We propose data-driven mark orientation as a solution in such cases, where the directionality of marks in the scatterplot guide participants when visual estimation is otherwise unclear or ambiguous. Through a set of laboratory studies, we investigate trend estimation across a variety of data distributions and mark directionalities, and find that data-driven mark orientation can help resolve ambiguities in visual trend estimates.
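A plausible way to derive such orientations, shown here only as an illustration, is to fit a locally weighted regression around each point and angle its mark along the fitted slope. The Gaussian weighting and bandwidth below are assumptions, not necessarily the paper's method.

```python
import numpy as np

def mark_angles(x, y, bandwidth=0.2):
    """Orientation (degrees) for each scatterplot mark from a local trend fit."""
    angles = np.empty_like(x)
    for i in range(len(x)):
        w = np.exp(-((x - x[i]) ** 2) / (2 * bandwidth ** 2))  # Gaussian weights
        xm, ym = np.average(x, weights=w), np.average(y, weights=w)
        slope = np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)
        angles[i] = np.degrees(np.arctan(slope))
    return angles

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(200)  # nonlinear trend
print(mark_angles(x, y)[:5])
```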
3
More Kawaii than a Real-Person Streamer: Understanding How the Otaku Community Engages with and Perceives Virtual YouTubers
Zhicong Lu (City University of Hong Kong, Hong Kong, China), Chenxinran Shen (University of Toronto, Toronto, Ontario, Canada), Jiannan Li (University of Toronto, Toronto, Ontario, Canada), Hong Shen (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Daniel Wigdor (University of Toronto, Toronto, Ontario, Canada)
Live streaming has become increasingly popular, with most streamers presenting their real-life appearance. However, Virtual YouTubers (VTubers), virtual 2D or 3D avatars that are voiced by humans, are emerging as live streamers and attracting a growing viewership in East Asia. Although prior research has found that many viewers seek real-life interpersonal interactions with real-person streamers, it is currently unknown what makes VTuber live streams engaging or how they are perceived differently than real-person streamers. We conducted an interview study to understand how viewers engage with VTubers and perceive the identities of the voice actors behind the avatars (i.e., Nakanohito). The data revealed that virtual avatars bring unique performative opportunities which result in different viewer expectations and interpretations of VTuber behavior. Viewers intentionally upheld the disembodiment of VTuber avatars from their voice actors. We uncover the nuances in viewer perceptions and attitudes and further discuss the implications of VTuber practices to the understanding of live streaming in general.
2
Understanding the Design Space of Embodied Passwords based on Muscle Memory
Rosa van Koningsbruggen (Bauhaus-Universität Weimar, Weimar, Germany), Bart Hengeveld (Eindhoven University of Technology, Eindhoven, Netherlands), Jason Alexander (University of Bath, Bath, United Kingdom)
Passwords have become a ubiquitous part of our everyday lives, needed for every web-service and system. However, it is challenging to create safe and diverse alphanumeric passwords, and to recall them, imposing a cognitive burden on the user. Through consecutive experiments, we explored the movement space, affordances and interaction, and memorability of a tangible, handheld, embodied password. In this context, we found that: (1) a movement space of 200 mm × 200 mm is preferred; (2) each context has a perceived level of safety, which—together with the affordances and link to familiarity—influences how the password is performed. Furthermore, the artefact’s dimensions should be balanced within the design itself, with the user, and the context, but there is a trade-off between the perceived safety and ergonomics; and (3) the designed embodied passwords can be recalled for at least a week, with participants creating unique passwords which were reproduced consistently.
2
From FOMO to JOMO: Examining the Fear and Joy of Missing Out and Presence in a 360° Video Viewing Experience
Tanja Aitamurto (University of Illinois at Chicago, Chicago, Illinois, United States), Andrea Stevenson Won (Cornell University, Ithaca, New York, United States), Sukolsak Sakshuwong (Stanford University, Stanford, California, United States), Byungdoo Kim (Cornell University, Ithaca, New York, United States), Yasamin Sadeghi (University of California, Los Angeles, Los Angeles, California, United States), Krysten Stein (University of Illinois at Chicago, Chicago, Illinois, United States), Peter G. Royal (University of Illinois at Chicago, Chicago, Illinois, United States), Catherine Lynn Kircos (Evidation Health, San Mateo, California, United States)
Cinematic Virtual Reality (CVR), or 360° video, engages users in immersive viewing experiences. However, as users watch one part of the 360° view, they will necessarily miss out on events happening in other parts of the sphere. Consequently, fear of missing out (FOMO) is unavoidable. However, users can also experience the joy of missing out (JOMO). In a repeated measures, mixed methods design, we examined the fear and joy of missing out (FOMO and JOMO) and sense of presence in two repeat viewings of a 360° film using a head-mounted display. We found that users experienced both FOMO and JOMO. FOMO was caused by the users' awareness of parallel events in the spherical view, but users also experienced JOMO. FOMO did not compromise viewers' sense of presence, and FOMO also decreased in the second viewing session, while JOMO remained constant. The findings suggest that FOMO and JOMO can be two integral qualities in an immersive video viewing experience and that FOMO may not be as negative a factor as previously thought.
2
Stereo-Smell via Electrical Trigeminal Stimulation
Jas Brooks (University of Chicago, Chicago, Illinois, United States), Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States), Jingxuan Wen (University of Chicago, Chicago, Illinois, United States), Romain Nith (University of Chicago, Chicago, Illinois, United States), Jun Nishida (University of Chicago, Chicago, Illinois, United States), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a novel type of olfactory device that creates a stereo-smell experience, i.e., directional information about the location of an odor, by rendering the readings of external odor sensors as trigeminal sensations using electrical stimulation of the user’s nasal septum. The key is that the sensations from the trigeminal nerve, which arise from nerve-endings in the nose, are perceptually fused with those of the olfactory bulb (the brain region that senses smells). As such, we propose that electrically stimulating the trigeminal nerve is an ideal candidate for stereo-smell augmentation/substitution that, unlike other approaches, does not require implanted electrodes in the olfactory bulb. To realize this, we engineered a self-contained device that users wear across their nasal septum. Our device outputs by stimulating the user’s trigeminal nerve using electrical impulses with variable pulse-widths; and it inputs by sensing the user’s inhalations using a photoreflector. It measures 10×23 mm and communicates with external gas sensors using Bluetooth. In our user study, we found the key electrical waveform parameters that enable users to feel an odor’s intensity (absolute electric charge) and direction (phase order and net charge). In our second study, we demonstrated that participants were able to localize a virtual smell source in the room by using our prototype without any previous training. Using these insights, our device enables expressive trigeminal sensations and could function as an assistive device for people with anosmia, who are unable to smell.
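The reported mapping, intensity via absolute charge and direction via phase order and net charge, suggests a small rendering function of the following shape. All calibration constants and the `stimulus_params` interface are hypothetical; the sketch only illustrates the mapping described in the abstract.

```python
def stimulus_params(left_ppm, right_ppm, max_charge_uc=8.0):
    """Map two odor-sensor readings to illustrative stimulation parameters."""
    total = left_ppm + right_ppm
    intensity = min(total / 100.0, 1.0)           # normalised odour strength
    charge = intensity * max_charge_uc            # felt intensity ~ charge
    lead = "left" if left_ppm >= right_ppm else "right"   # phase order
    balance = (left_ppm - right_ppm) / max(total, 1e-9)   # net charge split
    return {"charge_uC": round(charge, 2),
            "leading_side": lead,
            "net_balance": round(balance, 2)}

print(stimulus_params(left_ppm=30, right_ppm=10))
```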
2
JetController: High-speed Ungrounded 3-DoF Force Feedback Controllers using Air Propulsion Jets
Yu-Wei Wang (National Taiwan University, Taipei, Taiwan), Yu-Hsin Lin (National Taiwan University, Taipei, Taiwan), Pin-Sung Ku (National Taiwan University, Taipei, Taiwan), Yōko Miyatake (Ochanomizu University, Tokyo, Japan), Yi-Hsuan Mao (National Taiwan University, Taipei, Taiwan), Po-Yu Chen (National Taiwan University, Taipei, Taiwan), Chun-Miao Tseng (National Taiwan University, Taipei, Taiwan), Mike Y. Chen (National Taiwan University, Taipei, Taiwan)
JetController is a novel haptic technology capable of supporting high-speed and persistent 3-DoF ungrounded force feedback. It uses high-speed pneumatic solenoid valves to modulate compressed air, achieving full impulses at 20-50 Hz and 4.0-1.0 N, and combines multiple air propulsion jets to generate 3-DoF force feedback. Compared to propeller-based approaches, JetController supports 10-30 times faster impulse frequency, and its handheld device is significantly lighter and more compact. JetController supports a wide range of haptic events in games and VR experiences, from firing automatic weapons in games like Halo (15 Hz) to slicing fruits in Fruit Ninja (up to 45 Hz). To evaluate JetController, we integrated our prototype with two popular VR games, Half-Life: Alyx and Beat Saber, to support a variety of 3D interactions. Study results showed that JetController significantly improved realism, enjoyment, and overall experience compared to commercial vibrating controllers, and was preferred by most participants.
2
GuideBand: Intuitive 3D Multilevel Force Guidance on a Wristband in Virtual Reality
Hsin-Ruey Tsai (National Chengchi University, Taipei, Taiwan), Yuan-Chia Chang (Kyoto University, Kyoto, Japan), Tzu-Yun Wei (National Taiwan University, Taipei, Taiwan), Chih-An Tsao (National Taiwan University, Taipei, Taiwan), Xander Koo (Pomona College, Claremont, California, United States), Hao-Chuan Wang (UC Davis, Davis, California, United States), Bing-Yu Chen (National Taiwan University, Taipei, Taiwan)
For haptic guidance, vibrotactile feedback is a commonly used mechanism, but it requires users to interpret complicated patterns, especially in 3D guidance, which is not intuitive and increases their mental effort. Furthermore, for haptic guidance in virtual reality (VR), not only guidance performance but also realism should be considered. Since vibrotactile feedback interferes with and reduces VR realism, it may not be proper for VR haptic guidance. Therefore, we propose a wearable device, GuideBand, to provide intuitive 3D multilevel force guidance on the forearm, which reproduces the effect of the forearm being pulled and guided by a virtual guider or telepresent person in VR. GuideBand uses three motors to pull a wristband at different force levels in 3D space. Such feedback usually requires much larger and heavier robotic arms or exoskeletons. We conducted a just-noticeable difference study to understand users' force level distinguishability. Based on the results, we performed a study to verify that, compared with state-of-the-art vibrotactile guidance, GuideBand is more intuitive, needs a lower level of mental effort, and achieves similar guidance performance. We further conducted a VR experience study to observe how users combine and complement visual and force guidance, and to show that GuideBand enhances realism in VR guidance.
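Because each motor can only pull on the wristband, rendering a desired 3D guidance force amounts to finding non-negative tendon tensions whose combination approximates it. A minimal sketch, with an assumed tendon geometry rather than GuideBand's actual one:

```python
import numpy as np
from scipy.optimize import nnls

# Approximate unit pull directions of three tendons in the forearm frame (assumed).
D = np.array([
    [ 0.0,  0.8,  0.6],   # up-forward
    [ 0.7, -0.4,  0.6],   # right-back
    [-0.7, -0.4,  0.6],   # left-back
]).T                      # columns = pull directions

def tensions_for(force, max_n=3.0):
    """Non-negative least squares: tensions t >= 0 with D @ t ~ force."""
    t, residual = nnls(D, force)
    return np.clip(t, 0.0, max_n), residual

t, err = tensions_for(np.array([0.3, 0.5, 0.4]))
print("tendon tensions (N):", np.round(t, 2), "residual:", round(err, 3))
```

Non-negative least squares is a natural fit here because tendons cannot push; any force outside the cone spanned by the pull directions shows up as a residual.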
2
TiltChair: Manipulative Posture Guidance by Actively Inclining the Seat of an Office Chair
Kazuyuki Fujita (Tohoku University, Sendai, Miyagi, Japan), Aoi Suzuki (Research Institute of Electrical Communication, Tohoku University, Sendai, Japan), Kazuki Takashima (Tohoku University, Sendai, Japan), Kaori Ikematsu (Yahoo Japan Corporation, Tokyo, Japan), Yoshifumi Kitamura (Tohoku University, Sendai, Japan)
We propose TiltChair, an actuated office chair that physically manipulates the user's posture by actively inclining the chair's seat to address problems associated with prolonged sitting. The system controls the inclination angle and motion speed with the aim of achieving manipulative but unobtrusive posture guidance. To demonstrate its potential, we first built a prototype of TiltChair with a seat that could be tilted by pneumatic control. We then investigated the effects of the seat's inclination angle and motions on task performance and overall sitting experience through two experiments. The results show that the inclination angle mainly affects the difficulty of maintaining one's posture, while the motion speed affects the conspicuousness and subjective acceptability of the motion. However, these seating conditions did not affect objective task performance. Based on these results, we propose a design space for facilitating effective seat-inclination behavior using the three dimensions of angle, speed, and continuity. Furthermore, we discuss promising applications.
2
Phonetroller: Visual Representations of Fingers for Precise Touch Input when using a Phone in VR
Fabrice Matulic (Preferred Networks Inc., Tokyo, Japan), Aditya Ganeshan (Preferred Networks Inc., Tokyo, Japan), Hiroshi Fujiwara (Preferred Networks Inc., Tokyo, Japan), Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
Smartphone touch screens are potentially attractive for interaction in virtual reality (VR). However, the user cannot see the phone or their hands in a fully immersive VR setting, impeding their ability for precise touch input. We propose mounting a mirror above the phone screen such that the front-facing camera captures the thumbs on or near the screen. This enables the creation of semi-transparent overlays of thumb shadows and inference of fingertip hover points with deep learning, which help the user aim for targets on the phone. A study compares the effect of visual feedback on touch precision in a controlled task and qualitatively evaluates three example applications demonstrating the potential of the technique. The results show that the enabled style of feedback is effective for thumb-size targets, and that the VR experience can be enriched by using smartphones as VR controllers supporting precise touch input.
2
Increasing Electrical Muscle Stimulation’s Dexterity by means of Back of the Hand Actuation
Akifumi Takahashi (University of Chicago, Chicago, Illinois, United States), Jas Brooks (University of Chicago, Chicago, Illinois, United States), Hiroyuki Kajimoto (The University of Electro-Communications, Chofu, Tokyo, Japan), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a technique that allows an unprecedented level of dexterity in electrical muscle stimulation (EMS), i.e., it allows interactive EMS-based devices to flex the user's fingers independently of each other. EMS is a promising technique for force feedback because of its small form factor when compared to mechanical actuators. However, the current EMS approach to flexing the user's fingers (i.e., attaching electrodes to the base of the forearm, where finger muscles anchor) is limited by its inability to flex a target finger's metacarpophalangeal (MCP) joint independently of the other fingers. In other words, current EMS devices cannot flex one finger alone; they always induce unwanted actuation of adjacent fingers. To tackle this lack of dexterity, we propose and validate a new electrode layout that places the electrodes on the back of the hand, where they stimulate the interossei/lumbricals muscles in the palm, which have never received attention with regard to EMS. In our user study, we found that our technique offers four key benefits when compared to existing EMS electrode layouts: our technique (1) flexes all four fingers around the MCP joint more independently; (2) has less unwanted flexion of other joints (such as the proximal interphalangeal joint); (3) is more robust to wrist rotations; and (4) reduces calibration time. Therefore, our EMS technique enables applications for interactive EMS systems that require a level of flexion dexterity not available until now. We demonstrate the improved dexterity with four example applications: three musical instrument tutorials (piano, drum, and guitar) and a VR application that renders force feedback in individual fingers while manipulating a yo-yo.
2
Large Scale Analysis of Multitasking Behavior During Remote Meetings
Hancheng Cao (Stanford University, Stanford, California, United States), Chia-Jung Lee (Amazon, Seattle, Washington, United States), Shamsi Iqbal (Microsoft Research, Redmond, Washington, United States), Mary Czerwinski (Microsoft Research, Redmond, Washington, United States), Priscilla N. Y. Wong (UCL Interaction Centre, London, United Kingdom), Sean Rintel (Microsoft Research, Cambridge, United Kingdom), Brent Hecht (Microsoft, Redmond, Washington, United States), Jaime Teevan (Microsoft, Redmond, Washington, United States), Longqi Yang (Microsoft, Redmond, Washington, United States)
Virtual meetings are critical for remote work because of the need for synchronous collaboration in the absence of in-person interactions. In-meeting multitasking is closely linked to people's productivity and wellbeing. However, we currently have limited understanding of multitasking in remote meetings and its potential impact. In this paper, we present what we believe is the most comprehensive study of remote meeting multitasking behavior through an analysis of a large-scale telemetry dataset collected from February to May 2020 of U.S. Microsoft employees and a 715-person diary study. Our results demonstrate that intrinsic meeting characteristics such as size, length, time, and type, significantly correlate with the extent to which people multitask, and multitasking can lead to both positive and negative outcomes. Our findings suggest important best-practice guidelines for remote meetings (e.g., avoid important meetings in the morning) and design implications for productivity tools (e.g., support positive remote multitasking).
2
Dynamic Field of View Restriction in 360º Video: Aligning Optical Flow and Visual SLAM to Mitigate VIMS
Paulo Bala (Universidade Nova de Lisboa, Lisbon, Portugal), Ian Oakley (UNIST, Ulsan, Korea, Republic of), Valentina Nisi (Instituto Superior Técnico - Universidade de Lisboa, Lisboa, Portugal), Nuno Jardim Nunes (Instituto Superior Técnico - U. Lisbon, Lisboa - Madeira, Portugal)
Head-Mounted Display based Virtual Reality is proliferating. However, Visually Induced Motion Sickness (VIMS), which prevents many from using VR without discomfort, bars widespread adoption. Prior work has shown that limiting the Field of View (FoV) can reduce VIMS at a cost of also reducing presence. Systems that dynamically adjust a user's FoV may be able to balance these concerns. To explore this idea, we present a technique for standard 360º video that shrinks FoVs only during VIMS inducing scenes. It uses Visual Simultaneous Localization and Mapping and peripheral optical flow to compute camera movements and reduces FoV during rapid motion or optical flow. A user study (N=23) comparing 360º video with unrestricted-FoVs (90º), reduced fixed-FoVs (40º) and dynamic-FoVs (40º-90º) revealed that dynamic-FoVs mitigate VIMS while maintaining presence. We close by discussing the user experience of dynamic-FoVs and recommendations for how they can help make VR comfortable and immersive for all.
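A stripped-down version of the dynamic-FoV loop might look like the following: estimate optical flow between consecutive frames (here with OpenCV's Farneback method) and ease the FoV toward 40° when mean flow is high. The thresholds and smoothing factor are illustrative assumptions, not the paper's calibrated values.

```python
import cv2
import numpy as np

FOV_MIN, FOV_MAX = 40.0, 90.0

def next_fov(prev_gray, curr_gray, fov, flow_hi=8.0, alpha=0.1):
    """One step of a dynamic-FoV controller driven by mean optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2).mean()          # mean flow in pixels
    target = FOV_MAX - (FOV_MAX - FOV_MIN) * min(mag / flow_hi, 1.0)
    return (1 - alpha) * fov + alpha * target          # smooth FoV changes

# Demo with two synthetic frames (second one shifted to fake camera motion).
f1 = np.random.randint(0, 255, (120, 160), np.uint8)
f2 = np.roll(f1, 6, axis=1)
print("FoV after one fast frame:", round(next_fov(f1, f2, 90.0), 1))
```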
2
Visuo-haptic Illusions for Linear Translation and Stretching using Physical Proxies in Virtual Reality
Martin Feick (Saarland Informatics Campus, Saarbrücken, Germany), Niko Kleer (Saarland Informatics Campus, Saarbrücken, Germany), André Zenner (Saarland University, Saarland Informatics Campus, Saarbrücken, Germany), Anthony Tang (University of Toronto, Toronto, Ontario, Canada), Antonio Krüger (DFKI, Saarland Informatics Campus, Saarbrücken, Germany)
Providing haptic feedback when manipulating virtual objects is an essential part of immersive virtual reality experiences; however, it is challenging to replicate all of an object’s properties and characteristics. We propose the use of visuo-haptic illusions alongside physical proxies to enhance the scope of proxy-based interactions with virtual objects. In this work, we focus on two manipulation techniques, linear translation and stretching across different distances, and investigate how much discrepancy between the physical proxy and the virtual object may be introduced without participants noticing. In a study with 24 participants, we found that manipulation technique and travel distance significantly affect the detection thresholds, and that visuo-haptic illusions impact performance and accuracy. We show that this technique can be used to enable functional proxy objects that act as stand-ins for multiple virtual objects, illustrating the technique through a showcase VR-DJ application.
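At its core, such an illusion remaps the virtual hand with a gain on the physical displacement, kept below the detection thresholds the study measures. A minimal sketch with a placeholder gain value:

```python
import numpy as np

def warped_hand(physical_pos, start, gain=1.15):
    """Scale hand displacement from the interaction start by `gain`.

    With gain > 1 the virtual hand travels farther than the real one, so a
    single physical proxy can stand in for virtual objects at offset positions.
    The 1.15 here is a placeholder, not a measured threshold from the paper.
    """
    return start + gain * (physical_pos - start)

start = np.array([0.0, 0.0, 0.0])
hand = np.array([0.10, 0.0, 0.30])                # physical hand, metres
print("virtual hand:", warped_hand(hand, start))  # drifts ahead of the real one
```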
2
Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics
Martin Schmitz (Technical University of Darmstadt, Darmstadt, Germany), Jan Riemann (Technical University of Darmstadt, Darmstadt, Germany), Florian Müller (TU Darmstadt, Darmstadt, Germany), Steffen Kreis (TU Darmstadt, Darmstadt, Germany), Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability by a series of example applications.
2
Augmenting Scientific Papers with Just-in-Time, Position-Sensitive Definitions of Terms and Symbols
Andrew Head (UC Berkeley, Berkeley, California, United States), Kyle Lo (Allen Institute for Artificial Intelligence, Seattle, Washington, United States), Dongyeop Kang (UC Berkeley, Berkeley, California, United States), Raymond Fok (University of Washington, Seattle, Washington, United States), Sam Skjonsberg (Allen Institute for AI, Seattle, Washington, United States), Daniel Weld (University of Washington, Seattle, Washington, United States), Marti Hearst (UC Berkeley, Berkeley, California, United States)
Despite the central importance of research papers to scientific progress, they can be difficult to read. Comprehension is often stymied when the information needed to understand a passage resides somewhere else—in another section, or in another paper. In this work, we envision how interfaces can bring definitions of technical terms and symbols to readers when and where they need them most. We introduce ScholarPhi, an augmented reading interface with four novel features: (1) tooltips that surface position-sensitive definitions from elsewhere in a paper, (2) a filter over the paper that “declutters” it to reveal how the term or symbol is used across the paper, (3) automatic equation diagrams that expose multiple definitions in parallel, and (4) an automatically generated glossary of important terms and symbols. A usability study showed that the tool helps researchers of all experience levels read papers. Furthermore, researchers were eager to have ScholarPhi’s definitions available to support their everyday reading.
2
GamesBond: Bimanual Haptic Illusion of Physically Connected Objects for Immersive VR Using Grip Deformation
Neung Ryu (KAIST, Daejeon, Korea, Republic of), Hye-Young Jo (KAIST, Daejeon, Korea, Republic of), Michel Pahud (Microsoft Research, Redmond, Washington, United States), Mike Sinclair (Microsoft, Redmond, Washington, United States), Andrea Bianchi (KAIST, Daejeon, Korea, Republic of)
Virtual Reality experiences, such as games and simulations, typically support the usage of bimanual controllers to interact with virtual objects. To recreate the haptic sensation of holding objects of various shapes and behaviors with both hands, previous researchers have used mechanical linkages between the controllers that render adjustable stiffness. However, the linkage cannot quickly adapt to simulate dynamic objects, nor can it be removed to support free movements. This paper introduces GamesBond, a pair of 4-DoF controllers without physical linkage but capable of creating the illusion of being connected as a single device, forming a virtual bond. The two controllers work together by dynamically displaying and physically rendering deformations of hand grips, thus allowing users to perceive a single connected object between the hands, such as a jumping rope. With a user study and various applications we show that GamesBond increases the realism, immersion, and enjoyment of bimanual interaction.
2
TexYZ: Embroidering Enameled Wires for Three Degree-of-Freedom Mutual Capacitive Sensing
Roland Aigner (University of Applied Sciences Upper Austria, Hagenberg, Austria), Andreas Pointner (University of Applied Sciences Upper Austria, Hagenberg, Austria), Thomas Preindl (University of Applied Sciences Upper Austria, Hagenberg, Austria), Rainer Danner (University of Applied Sciences Upper Austria, Hagenberg, Austria), Michael Haller (University of Applied Sciences Upper Austria, Hagenberg, Austria)
In this paper, we present TexYZ, a method for rapid and effortless manufacturing of textile mutual capacitive sensors using a commodity embroidery machine. We use enameled wire as a bobbin thread to yield textile capacitors with high quality and consistency. As a consequence, we are able to leverage the precision and expressiveness of projected mutual capacitance for textile electronics, even when size is limited. Harnessing the assets of machine embroidery, we implement and analyze five distinct electrode patterns, examine the resulting electrical features with respect to geometrical attributes, and demonstrate the feasibility of two promising candidates for small-scale matrix layouts. The resulting sensor patches are further evaluated in terms of capacitance homogeneity, signal-to-noise ratio, sensing range, and washability. Finally, we demonstrate two use case scenarios, primarily focusing on continuous input with up to three degrees-of-freedom.
2
Preserving Agency During Electrical Muscle Stimulation Training Speeds up Reaction Time Directly After Removing EMS
Shunichi Kasahara (Sony CSL, Tokyo, Japan), Kazuma Takada (Meiji University, Tokyo, Japan), Jun Nishida (University of Chicago, Chicago, Illinois, United States), Kazuhisa Shibata (RIKEN CBS, Wako, Saitama, Japan), Shinsuke Shimojo (California Institute of Technology, Pasadena, California, United States), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
Force feedback devices, such as motor-based exoskeletons or wearables based on electrical muscle stimulation (EMS), have the unique potential to accelerate users' own reaction time (RT). However, this speedup has only been explored while the device is attached to the user. In fact, very little is known regarding whether this faster reaction time still occurs after the user removes the device from their body. This is precisely what we investigated by means of a simple reaction time (RT) experiment, in which participants were asked to tap as soon as they saw an LED flashing. Participants experienced this in three EMS conditions: (1) fast-EMS, in which the electrical impulses were synced with the LED; (2) agency-EMS, in which the electrical impulse was delivered 40 ms earlier than the participant's own RT, which prior work has shown to preserve one's sense of agency over this movement; and (3) late-EMS, in which the impulse was delivered after the participant's own RT. Our results revealed that the participants' RT was significantly reduced by approximately 8 ms (up to 20 ms) only after training with the agency-EMS condition. This finding suggests that prioritizing agency during EMS training is key to motor adaptation, i.e., it enables a faster motor response even after the user has removed the EMS device from their body.
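The timing distinction between the conditions is easy to express in code. This sketch assumes a hypothetical `trigger_ems` call and shows only the agency-EMS schedule, where stimulation leads the user's own reaction time by 40 ms:

```python
import random
import time

def trigger_ems():
    """Placeholder for driving a real EMS stimulator."""
    print(f"EMS pulse at {time.perf_counter():.3f}s")

def run_trial(user_rt_s, lead_s=0.040):
    """Agency-EMS: fire the impulse 40 ms before the user's expected RT."""
    stimulus_t = time.perf_counter()
    print(f"LED on at {stimulus_t:.3f}s")
    time.sleep(max(user_rt_s - lead_s, 0))
    trigger_ems()

run_trial(user_rt_s=random.gauss(0.25, 0.02))  # simulated per-trial RT
```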
2
IdeaBot: Investigating Social Facilitation in Human-Machine Team Creativity
Angel Hsing-Chi Hwang (Cornell University, Ithaca, New York, United States), Andrea Stevenson Won (Cornell University, Ithaca, New York, United States)
The present study investigates how human subjects collaborate with a computer-mediated chatbot in creative idea generation tasks. In three text-based between-group studies, we tested whether the perceived identity (i.e., whether the bot is perceived as a machine or as a human) or the conversational style of a teammate would moderate the outcomes of participants' creative production. In Study 1, participants worked with either a chatbot or a human confederate. In Study 2, all participants worked with a human teammate but were informed that their partner was either a human or a chatbot. Conversely, all participants worked with a chatbot in Study 3, but were told the identity of their partner was either a chatbot or a human. We investigated differences in idea generation outcomes and found that participants consistently contributed more ideas, and ideas of higher quality, when they perceived their teamworking partner to be a bot. Furthermore, when the conversational style of the partner was robotic, participants with high anxiety in group communication reported greater creative self-efficacy in task performance. Finally, whether the perceived dominance of a partner and the pressure to come up with ideas during the task mediated positive outcomes of idea generation also depended on whether the conversational style of the bot partner was robot- or human-like. Based on our findings, we discuss implications for the future design of artificial agents as active team players in collaboration tasks.
2
Remote and Collaborative Virtual Reality Experiments via Social VR Platforms
David Saffo (Northeastern University, Boston, Massachusetts, United States), Sara Di Bartolomeo (Northeastern University, Boston, Massachusetts, United States), Caglar Yildirim (Northeastern University, Boston, Massachusetts, United States), Cody Dunne (Northeastern University, Boston, Massachusetts, United States)
Virtual reality (VR) researchers struggle to conduct remote studies. Previous work has focused on working around limitations imposed by traditional crowdsourcing methods. However, the potential for leveraging social VR platforms for HCI evaluations is largely unexplored. These platforms have large VR-ready user populations, distributed synchronous virtual environments, and support for user-generated content. We demonstrate how social VR platforms can be used to practically and ethically produce valid research results by replicating two studies using one such platform (VRChat): a quantitative study on Fitts’ law and a qualitative study on tabletop collaboration. Our replication studies exhibited analogous results to the originals, indicating the research validity of this approach. Moreover, we easily recruited experienced VR users with their own hardware for synchronous, remote, and collaborative participation. We further provide lessons learned for future researchers experimenting using social VR platforms. This paper and all supplemental materials are available at osf.io/c2amz.
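For the Fitts' law replication, the analysis side reduces to computing the Shannon index of difficulty and regressing movement time on it. A self-contained sketch with fabricated placeholder data (the study's real measurements are in its supplemental materials):

```python
import numpy as np

# Placeholder trial data: target distances D, widths W (m), movement times MT (s).
D = np.array([0.2, 0.4, 0.8, 0.2, 0.8])
W = np.array([0.04, 0.04, 0.04, 0.08, 0.02])
MT = np.array([0.55, 0.68, 0.80, 0.45, 0.98])

ID = np.log2(D / W + 1)          # Shannon formulation of the index of difficulty
b, a = np.polyfit(ID, MT, 1)     # fit MT = a + b * ID (slope first from polyfit)
throughput = np.mean(ID / MT)    # bits per second
print(f"MT = {a:.2f} + {b:.2f} * ID, throughput ~ {throughput:.1f} bit/s")
```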
2
The Role of Social Presence for Cooperation in Augmented Reality on Head Mounted Devices
Niklas Osmers (Clausthal University of Technology, Clausthal-Zellerfeld, Germany), Michael Prilla (Clausthal University of Technology, Clausthal-Zellerfeld, Germany), Oliver Blunk (Clausthal University of Technology, Clausthal-Zellerfeld, Germany), Gordon George Brown (Clausthal University of Technology, Clausthal-Zellerfeld, Germany), Marc Janßen (Clausthal University of Technology, Clausthal-Zellerfeld, Germany), Nicolas Kahrl (Clausthal University of Technology, Clausthal-Zellerfeld, Germany)
With growing interest in cooperation support using Augmented Reality (AR), social presence has become a popular measure of its quality. While this concept is established throughout cooperation research, its role in AR is still unclear: some work uses social presence as an indicator of support quality, while other work found no impact at all. To clarify this role, we conducted a literature review of recent publications that empirically investigated social presence in cooperative AR. After a thorough selection procedure, we analyzed 19 publications according to factors influencing social presence and the impact of social presence on cooperation support. We found that certain interventions support social presence better than others, that social presence has an influence on users' preferences, and that the relation between social presence and cooperation quality may depend on the symmetry of the cooperation task. This contributes to existing research by clarifying the role of social presence for cooperative AR and deriving corresponding design recommendations.
2
Interaction Illustration Taxonomy: Classification of Styles and Techniques for Visually Representing Interaction Scenarios
Axel Antoine (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France), Sylvain Malacria (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France), Nicolai Marquardt (University College London, London, United Kingdom), Géry Casiez (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France)
Static illustrations are a ubiquitous means to represent interaction scenarios. Across papers and reports, these visuals demonstrate people's use of devices, explain systems, or show design spaces. Creating such figures is challenging, and very little is known about the overarching strategies for visually representing interaction scenarios. To support this task, we contribute a unified taxonomy of design elements that compose such figures. In particular, we provide a detailed classification of Structural and Interaction strategies, such as composition, visual techniques, dynamics, representation of users, and many others, all in the context of the type of scenario. This taxonomy can inform researchers' choices when creating new figures, by providing a concise synthesis of visual strategies, and revealing approaches they were not aware of before. Furthermore, to support the community in creating further taxonomies, we also provide three open-source software tools that facilitate the coding process and visual exploration of the coding scheme.
2
Elbow-Anchored Interaction: Designing Restful Mid-Air Input
Rafael Veras (Huawei, Markham, Ontario, Canada), Gaganpreet Singh (Huawei, Markham, Ontario, Canada), Farzin Farhadi-Niaki (Huawei, Markham, Ontario, Canada), Ritesh Udhani (University of Manitoba, Winnipeg, Manitoba, Canada), Parth Pradeep Patekar (University of Manitoba, Winnipeg, Manitoba, Canada), Wei Zhou (Huawei Technologies, Markham, Ontario, Canada), Pourang Irani (University of Manitoba, Winnipeg, Manitoba, Canada), Wei Li (Huawei Canada, Markham, Ontario, Canada)
We designed a mid-air input space for restful interactions on the couch. We observed people gesturing in various postures on a couch and found that posture affects the choice of arm motions when no constraints are imposed by a system. Study participants who sat with the arm rested were more likely to use the forearm and wrist, as opposed to the whole arm. We investigate how a spherical input space, where forearm angles are mapped to screen coordinates, can facilitate restful mid-air input in multiple postures. We present two controlled studies. In the first, we examine how a spherical space compares with a planar space in an elbow-anchored setup, with a shoulder-level input space as baseline. In the second, we examine the performance of a spherical input space in four common couch postures that set unique constraints on the arm. We observe that a spherical model that captures forearm movement facilitates comfortable input across different seated postures.
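The spherical input space boils down to mapping forearm yaw and pitch about the elbow anchor onto screen coordinates. The linear mapping and comfort ranges below are assumptions for illustration, not the paper's calibrated values:

```python
import math

YAW_RANGE = math.radians(60)    # assumed comfortable forearm sweep
PITCH_RANGE = math.radians(40)
W, H = 1920, 1080               # target display resolution

def to_screen(yaw, pitch):
    """Map forearm angles (radians, 0 at the resting pose) to pixel coordinates."""
    nx = (yaw / YAW_RANGE + 1) / 2          # normalise to 0..1
    ny = (-pitch / PITCH_RANGE + 1) / 2     # up = positive pitch
    return round(nx * (W - 1)), round(ny * (H - 1))

print(to_screen(0.0, 0.0))                             # centre of the screen
print(to_screen(math.radians(30), math.radians(10)))   # up and to the right
```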
2
“Grip-that-there”: An Investigation of Explicit and Implicit Task Allocation Techniques for Human-Robot Collaboration
Karthik Mahadevan (University of Toronto, Toronto, Ontario, Canada)Mauricio Sousa (University of Toronto, Toronto, Ontario, Canada)Anthony Tang (University of Toronto, Toronto, Ontario, Canada)Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
In ad-hoc human-robot collaboration (HRC), humans and robots work on a task without pre-planning the robot's actions prior to execution; instead, task allocation occurs in real time. However, prior research has largely focused on task allocations that are pre-planned - there has not been a comprehensive exploration or evaluation of techniques where task allocation is adjusted in real time. Inspired by HCI research on territoriality and proxemics, we propose a design space of novel task allocation techniques, including both explicit techniques, where the user maintains agency, and implicit techniques, where the efficiency of automation can be leveraged. The techniques were implemented and evaluated using a tabletop HRC simulation in VR. A 16-participant study, which presented variations of a collaborative block-stacking task, showed that implicit techniques enable efficient task completion and task parallelization, and should be augmented with explicit mechanisms to provide users with fine-grained control.
2
HairTouch: Providing Stiffness, Roughness and Surface Height Differences Using Reconfigurable Brush Hairs on a VR Controller
Chi-Jung Lee (National Taiwan University, Taipei, Taiwan)Hsin-Ruey Tsai (National Chengchi University, Taipei, Taiwan)Bing-Yu Chen (National Taiwan University, Taipei, Taiwan)
Tactile feedback is widely used to enhance realism in virtual reality (VR). When touching virtual objects, stiffness and roughness are common and salient factors perceived by users. Furthermore, when touching a surface with a complicated structure, differences in not only stiffness and roughness but also surface height are crucial. To integrate these factors, we propose a pin-based handheld device, HairTouch, that provides stiffness differences, roughness differences, surface height differences, and their combinations. HairTouch consists of two pins, one for each of the two finger segments closest to the index fingertip. By controlling the brush hairs' length and bending direction to change the hairs' elasticity and tip direction, each pin renders various levels of stiffness and roughness. By further controlling the hairs' configuration and the pins' height independently, versatile combinations of stiffness, roughness, and surface height differences are achieved. We conducted a perception study to understand users' ability to distinguish stiffness and roughness on each of the segments. Based on the results, we conducted a VR experience study verifying that the tactile feedback from HairTouch enhances VR realism.
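To make the control scheme concrete, here is a toy rendering map in Python. The linear relations and numeric ranges are purely illustrative assumptions; the paper derives its actual mapping from the perception study.

def pin_config(stiffness, roughness, surface_height_mm):
    """Return an illustrative (hair_length_mm, bend_deg, pin_height_mm) tuple.

    stiffness and roughness are in [0, 1]. Longer hairs are more compliant
    (softer); bending the hairs presents their tips sideways, which is
    perceived as rougher; raising the pin renders surface height.
    """
    hair_length_mm = 25 - 15 * stiffness   # hypothetical: 25 mm (soft) to 10 mm (stiff)
    bend_deg = 90 * roughness              # hypothetical: 0 deg (smooth) to 90 deg (rough)
    return hair_length_mm, bend_deg, surface_height_mm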
2
User Authentication via Electrical Muscle Stimulation
Yuxin Chen (University of Chicago, Chicago, Illinois, United States)Zhuolin Yang (University of Chicago, Chicago, Illinois, United States)Ruben Abbou (University of Chicago, Chicago, Illinois, United States)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)Ben Y.. Zhao (University of Chicago, Chicago, Illinois, United States)Haitao Zheng (University of Chicago, Chicago, Illinois, United States)
We propose a novel modality for active biometric authentication: electrical muscle stimulation (EMS). To explore this, we engineered an interactive system, which we call ElectricAuth, that stimulates the user’s forearm muscles with a sequence of electrical impulses (i.e., an EMS challenge) and measures the user’s involuntary finger movements (i.e., the response to the challenge). ElectricAuth leverages EMS’s intersubject variability, where the same electrical stimulation results in different movements in different users because everybody’s physiology is unique (e.g., differences in bone and muscular structure, skin resistance and composition, etc.). As such, ElectricAuth allows users to log in without memorizing passwords or PINs. ElectricAuth’s challenge-response structure makes it secure against data breaches and replay attacks, a major vulnerability facing today’s biometrics such as facial recognition and fingerprints. Furthermore, ElectricAuth never reuses the same challenge twice in authentications – in just one second of stimulation it encodes one of 68M possible challenges. In our user studies, we found that ElectricAuth resists: (1) impersonation attacks (false acceptance rate: 0.17% at 5% false rejection rate); (2) replay attacks (false acceptance rate: 0.00% at 5% false rejection rate); and (3) synthesis attacks (false acceptance rates: 0.2-2.5%). Our longitudinal study also shows that ElectricAuth produces consistent results over time and across different humidity and muscle conditions.
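The challenge-response loop can be sketched compactly in Python. The slot and level counts below are hypothetical (6^10 ≈ 60M sequences, in the ballpark of the paper's 68M figure), and the per-user response model and similarity threshold are stand-ins, not the authors' pipeline.

import secrets

NUM_SLOTS = 10   # hypothetical impulse slots in one second of stimulation
LEVELS = 6       # hypothetical intensity/electrode options per slot

_used_challenges = set()

def new_challenge():
    """Draw a fresh impulse sequence; a challenge is never reused."""
    while True:
        c = tuple(secrets.randbelow(LEVELS) for _ in range(NUM_SLOTS))
        if c not in _used_challenges:
            _used_challenges.add(c)
            return c

def cosine(a, b):
    """Similarity between two finger-movement feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def authenticate(challenge, measured_response, predict_response, threshold=0.9):
    """Accept iff the involuntary movement matches the enrolled user.

    predict_response: a per-user model mapping a challenge to expected
    movement features (learned at enrollment); stubbed here.
    """
    return cosine(predict_response(challenge), measured_response) >= threshold

In use, a fresh challenge is delivered via EMS and the tracked finger response is compared against the enrolled model's prediction; because challenges never repeat, a recorded response cannot be replayed.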
2
MagnetIO: Passive yet Interactive Soft Haptic Patches Anywhere
Alex Mazursky (University of Chicago, Chicago, Illinois, United States)Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States)Romain Nith (University of Chicago, Chicago, Illinois, United States)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a new type of haptic actuator, which we call MagnetIO, composed of two parts: a battery-powered voice coil worn on the user’s fingernail and any number of interactive soft patches that can be attached to any surface (everyday objects, the user’s body, appliances, etc.). When the finger wearing the voice coil contacts one of the interactive patches, the device detects the patch’s magnetic signature via a magnetometer and vibrates the patch, adding haptic feedback to otherwise input-only interactions. To allow these passive patches to vibrate, we make them from silicone with regions doped with polarized neodymium powder, resulting in soft and stretchable magnets. This stretchable form factor allows them to be wrapped around the user’s body or everyday objects of various shapes. We demonstrate how these patches add haptic output to many situations, such as adding haptic buttons to the walls of one’s home. In our technical evaluation, we demonstrate that our interactive patches can be excited across a wide range of frequencies (0-500 Hz) and can be tuned to resonate at specific frequencies based on the patch’s geometry. Furthermore, we demonstrate that MagnetIO’s vibration intensity is as powerful as that of a typical linear resonant actuator (LRA); yet, unlike these rigid actuators, our passive patches operate as springs with multiple modes of vibration, which enables a wider band around the resonant frequency than an equivalent LRA.
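As a rough illustration of the interaction loop (not the authors' code), the nail-worn unit can poll its magnetometer, match the sensed field against known patch signatures, and drive the voice coil at that patch's resonance. The signature table, field ranges, and hardware callbacks below are assumptions.

import time

PATCH_SIGNATURES = {
    "wall_button": {"field_uT": (300, 400), "resonance_hz": 170},
    "sofa_patch":  {"field_uT": (150, 250), "resonance_hz": 120},
}

def identify_patch(field_uT):
    """Match the sensed field magnitude against known patch signatures."""
    for name, sig in PATCH_SIGNATURES.items():
        lo, hi = sig["field_uT"]
        if lo <= field_uT <= hi:
            return name
    return None

def haptic_loop(read_magnetometer, drive_voice_coil):
    """Poll the nail-worn magnetometer; on contact with a known patch,
    excite it at its resonant frequency for maximal vibration amplitude."""
    while True:
        patch = identify_patch(read_magnetometer())
        if patch is not None:
            drive_voice_coil(freq_hz=PATCH_SIGNATURES[patch]["resonance_hz"])
        time.sleep(0.005)   # ~200 Hz polling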
2
Assessing Social Anxiety Through Digital Biomarkers Embedded in a Gaming Task
Martin Johannes. Dechant (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)Julian Frommel (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)Regan L. Mandryk (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)
Digital biomarkers of mental health issues offer many advantages, including timely identification for early intervention, ongoing assessment during treatment, and reducing barriers to assessment stemming from geography, age, fear, or disparities in access to systems of care. Embedding digital biomarkers into games may further increase the reach of digital assessment. In this study, we explore game-based digital biomarkers for social anxiety, based on interaction with a non-player character (NPC). We show that social anxiety affects a player’s accuracy and their movement path in a gaming task involving an NPC. Further, we compared first versus third-person camera perspectives and the use of customized versus predefined avatars to explore the influence of common game interface factors on the expression of social anxiety through in-game movements. Our findings provide new insights about how game-based digital biomarkers can be effectively used for social anxiety, affording the benefits of early and ongoing digital assessment.
2
Sketchnote Components, Design Space Dimensions, and Strategies for Effective Visual Note Taking
Rebecca Zheng (University College London, London, United Kingdom)Marina Fernández Camporro (University College London, London, United Kingdom)Hugo Romat (ETH, Zurich, Switzerland)Nathalie Henry Riche (Microsoft Research, Redmond, Washington, United States)Benjamin Bach (Edinburgh University, Edinburgh, United Kingdom)Fanny Chevalier (University of Toronto, Toronto, Ontario, Canada)Ken Hinckley (Microsoft Research, Redmond, Washington, United States)Nicolai Marquardt (University College London, London, United Kingdom)
Sketchnoting is a form of visual note taking in which people listen to, synthesize, and visualize ideas from a talk or other event using a combination of pictures, diagrams, and text. Little is known about the design space of this kind of visual note taking. With an eye towards informing the implementation of digital equivalents of sketchnoting, inking, and note taking, we introduce a classification of sketchnote styles and techniques, based on a qualitative analysis of 103 sketchnotes and situated in context through six semi-structured follow-up interviews. Our findings distill core sketchnote components (content, layout, structuring elements, and visual styling) and dimensions of the sketchnote design space, classifying levels of conciseness, illustration, structure, personification, cohesion, and craftsmanship. We unpack strategies to address particular note-taking challenges, for example dealing with the constraints of live drawing, and discuss relevance for future digital inking tools, such as recomposition, styling, and design suggestions.
2
Towards “Avatar-Friendly” 3D Manipulation Techniques: Bridging the Gap Between Sense of Embodiment and Interaction in Virtual Reality
Diane Dewez (Inria, Rennes, France)Ludovic Hoyet (Inria, Rennes, France)Anatole Lécuyer (Inria, Rennes, France)Ferran Argelaguet Sanz (Inria, Rennes, France)
Avatars, the users' virtual representations, are becoming ubiquitous in virtual reality applications. In this context, the avatar becomes the medium that enables users to manipulate objects in the virtual environment. It also becomes the users' main spatial reference, which can alter not only their interaction with the virtual environment but also their perception of themselves. In this paper, we review and analyse the current state of the art for 3D object manipulation and the sense of embodiment. Our analysis is twofold. First, we discuss the impact that the avatar can have on object manipulation. Second, we discuss how the different components of a manipulation technique (i.e. input, control, and feedback) can influence the user’s sense of embodiment. Throughout the analysis, we crystallise our discussion with practical guidelines for VR application designers and propose several research topics towards “avatar-friendly” manipulation techniques.
2
LightTouch Gadgets: Extending Interactions on Capacitive Touchscreens by Converting Light Emission to Touch Inputs
Kaori Ikematsu (Yahoo Japan Corporation, Tokyo, Japan)Kunihiro Kato (Tokyo University of Technology, Tokyo, Japan)Yoshihiro Kawahara (The University of Tokyo, Tokyo, Japan)
We present LightTouch, a 3D-printed passive gadget that enhances touch interactions on unmodified capacitive touchscreens. LightTouch gadgets simulate finger operations such as tapping, swiping, and multi-touch gestures by means of conductive materials and light-dependent resistors (LDRs) embedded in the object. The touchscreen emits visible light, and the LDR's resistance changes with the level of that light. By controlling the screen brightness, the gadget intentionally connects or disconnects the path between ground (GND) and the touchscreen, thus allowing touch inputs to be controlled. In contrast to conventional physical extensions for touchscreens, our technique requires neither continuous finger contact on the conductive part nor the use of batteries. As such, it opens up new possibilities for touchscreen interactions beyond the simple automation of touch inputs, such as establishing a communication channel between devices, enhancing the trackability of tangibles, and enabling inter-application operations.
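The screen-to-gadget channel this enables can be sketched as brightness keying. In the Python sketch below, the symbol duration and the set_region_brightness callback are hypothetical placeholders, not part of LightTouch itself.

import time

BIT_PERIOD_S = 0.1   # assumed symbol duration

def send_bits(bits, set_region_brightness):
    """Blink the screen region under the gadget's LDR to emit a bit string.

    A bright region lowers the LDR's resistance, closing the gadget's path
    to ground so the touchscreen registers a touch (bit 1); a dark region
    opens the path again (bit 0).
    """
    for bit in bits:
        set_region_brightness(1.0 if bit else 0.0)
        time.sleep(BIT_PERIOD_S)

Reading the resulting touch/no-touch pattern back on the touchscreen side closes the loop, e.g. for pairing a device with a gadget resting on the screen.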
1
Gesture Knitter: A Hand Gesture Design Tool for Head-Mounted Mixed Reality Applications
George B. Mo (University of Cambridge, Cambridge, United Kingdom)John J. Dudley (University of Cambridge, Cambridge, United Kingdom)Per Ola Kristensson (University of Cambridge, Cambridge, United Kingdom)
Hand gestures are a natural and expressive input method enabled by modern mixed reality headsets. However, it remains challenging for developers to create custom gestures for their applications. Conventional approaches to bespoke gesture recognition involve either hand-crafted heuristics or data-intensive deep learning. Neither approach is well suited to rapid prototyping of new interactions. This paper introduces a flexible and efficient alternative approach for constructing hand gestures. We present Gesture Knitter: a design tool for creating custom gesture recognizers with minimal training data. Gesture Knitter allows the specification of gesture primitives that can then be combined to create more complex gestures using a visual declarative script. Designers can build custom recognizers by declaring them from scratch or by providing a demonstration that is automatically decoded into its primitive components. Our developer study shows that Gesture Knitter achieves high recognition accuracy despite minimal training data and delivers an expressive and creative design experience.
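The "knitting" idea, composing a complex gesture declaratively from recognized primitives, can be illustrated with a toy combinator in Python. The primitive names and sequencing operator are invented for illustration; the actual Gesture Knitter script language differs.

def knit(*primitives):
    """Declare a complex gesture as an ordered sequence of primitive names."""
    def recognize(decoded_stream):
        it = iter(decoded_stream)
        # A match requires every primitive to appear, in order, in the
        # stream of primitives decoded from the hand-tracking data.
        return all(p in it for p in primitives)
    return recognize

# e.g. a grab-and-release gesture knitted from two hand-shape primitives:
grab_release = knit("pinch_close", "pinch_open")
print(grab_release(["idle", "pinch_close", "hold", "pinch_open"]))  # True
print(grab_release(["pinch_open", "idle", "pinch_close"]))          # False

Because each primitive needs only a small recognizer of its own, composing them this way is what keeps the training-data requirement minimal.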
1
Living Memory Home: Understanding Continuing Bond in the Digital Age through Backstage Grieving
Wan-Jou She (Weill Cornell Medicine, New York, New York, United States)Panote Siriaraya (Kyoto Institute of Technology, Kyoto, Japan)Chee Siang Ang (University of Kent, Canterbury, KENT, United Kingdom)Holly Gwen. Prigerson (Weill Cornell Medicine, New York, New York, United States)
Prolonged Grief Disorder (PGD) is a condition in which mourners are stuck in the grief process for a prolonged period and continue to suffer from an intense, maladaptive level of grief. Despite the increased popularity of virtual mourning practices, and the subsequent emergence of HCI research in this area, there is little research looking into how continuing bonds maintained digitally promote or impede bereavement adjustment. Through a one-month diary study and in-depth interviews with 17 participants who had recently lost loved ones, we identified four broad mechanisms by which grievers engage in what we call "backstage" grieving (as opposed to bereavement through digital public spaces like social media). We further discuss how this personal and private grieving is important for maintaining emotional well-being, and hence for avoiding PGD, as well as possible design opportunities and challenges for future digital tools to support grieving.
1
Interaction Pace and User Preferences
Alix Goguey (Université Grenoble Alpes, Grenoble, France)Carl Gutwin (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)Zhe Chen (University of Canterbury, Christchurch, New Zealand)Pang Suwanaposee (University of Canterbury, Christchurch, New Zealand)Andy Cockburn (University of Canterbury, Christchurch, New Zealand)
The overall pace of interaction combines the user's pace and the system's pace, and a mismatch between the two can hurt user preferences (e.g., animations or timeouts that are too fast or too slow for the user). Motivated by studies of speech rate convergence, we conducted an experiment to examine whether user preferences for system pace are correlated with user pace. Subjects first completed a series of trials to determine their user pace. They then completed a series of hierarchical drag-and-drop trials in which folders automatically expanded when the cursor hovered for longer than a controlled timeout. Results showed that preferences for timeout values correlated with user pace -- slow-paced users preferred long timeouts, and fast-paced users preferred short timeouts. The results indicate potential benefits in moving away from fixed or customisable settings for system pace. Instead, systems could improve preferences by automatically adapting their pace to converge towards that of the user.
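A system could implement this convergence with a simple running estimate of the user's pace. The Python sketch below is one plausible realization, assuming an exponential moving average and an initial timeout; the constants are not from the paper.

class AdaptiveTimeout:
    """Converge a hover timeout towards the observed user pace."""

    def __init__(self, initial_timeout=0.8, alpha=0.2):
        self.timeout = initial_timeout   # seconds of hover before auto-expand
        self.alpha = alpha               # adaptation rate (assumed)

    def observe_action_interval(self, interval_s):
        """Feed the time between successive user actions (their pace)."""
        # The moving average pulls the timeout towards the user's pace:
        # fast users get shorter timeouts, slow users get longer ones.
        self.timeout += self.alpha * (interval_s - self.timeout)

    def hover_timeout(self):
        return self.timeout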
1
A Critical Assessment of the Use of SSQ as a Measure of General Discomfort in VR Head-Mounted Displays
Teresa Hirzle (Ulm University, Ulm, Germany)Maurice Cordts (Ulm University, Ulm, Germany)Enrico Rukzio (University of Ulm, Ulm, Germany)Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France)Andreas Bulling (University of Stuttgart, Stuttgart, Germany)
Based on a systematic literature review of more than 300 papers published over the last 10 years, we provide indicators that the simulator sickness questionnaire (SSQ) is extensively used and widely accepted as a general discomfort measure in virtual reality (VR) research – although it accounts for only one category of symptoms. As a result, other important categories (digital eye strain (DES) and ergonomics) are largely neglected. To contribute to a more comprehensive picture of discomfort in VR head-mounted displays, we further conducted an online study (N=352) on the severity and relevance of all three symptom categories. Most importantly, our results reveal that symptoms of simulator sickness are significantly less severe and of lower prevalence than those of DES and ergonomics. In light of these findings, we critically discuss the current use of SSQ as the only discomfort measure and propose a more comprehensive factor model that also includes DES and ergonomics.
1
HapticSeer: A Multi-channel, Black-box, Platform-agnostic Approach to Detecting Video Game Events for Real-time Haptic Feedback
Yu-Hsin Lin (National Taiwan University, Taipei City, Taiwan)Yu-Wei Wang (National Taiwan University, Taipei City, Taiwan)Pin-Sung Ku (National Taiwan University, Taipei City, Taiwan)Yun-Ting Cheng (National Taiwan University, Taipei City, Taiwan)Yuan-Chih Hsu (National Taiwan University, Taipei City, Taiwan)Ching-Yi Tsai (National Taiwan University, Taipei City, Taiwan)Mike Y.. Chen (National Taiwan University, Taipei City, Taiwan)
Haptic feedback significantly enhances virtual experiences. However, supporting haptics currently requires modifying the codebase, making it impractical to add haptics to popular, high-quality experiences such as best-selling games, which are typically closed-source. We present HapticSeer, a multi-channel, black-box, platform-agnostic approach to detecting video game events for real-time haptic feedback. The approach is based on two key insights: (1) all games have three types of data streams -- video, audio, and controller I/O -- that can be analyzed in real time to detect game events, and (2) a small number of user interface design patterns are reused across most games, so event detectors can be reused effectively. We developed an open-source HapticSeer framework and implemented several real-time event detectors for commercial PC and VR games. We validated the system's correctness and real-time performance, and we discuss feedback from several haptics developers who used the HapticSeer framework to integrate research and commercial haptic devices.
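The black-box pipeline amounts to running per-stream detectors over the three data streams and firing haptic effects on matches. The Python sketch below illustrates that shape; the names and interfaces are assumptions, not the open-source framework's actual API.

from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple

@dataclass
class Detector:
    stream: str                          # "video", "audio", or "controller"
    predicate: Callable[[Any], bool]     # does this payload contain the event?
    effect: Callable[[], None]           # haptic effect to fire

def run(detectors: Iterable[Detector], stream_items: Iterable[Tuple[str, Any]]):
    """stream_items: (stream_name, payload) pairs produced in real time."""
    for stream, payload in stream_items:
        for d in detectors:
            if d.stream == stream and d.predicate(payload):
                d.effect()   # e.g. rumble when the on-screen health bar drops

Because UI patterns such as health bars recur across titles, a detector written once (say, thresholding red pixels in a fixed screen region) can transfer between games without touching any game code.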