The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Personal Dream Informatics: A Self-Information Systems Model of Dream Engagement
Michael Jeffrey Daniel Hoefer (University of Colorado Boulder, Boulder, Colorado, United States); Bryce E. Schumacher (University of Colorado Boulder, Boulder, Colorado, United States); Stephen Voida (University of Colorado Boulder, Boulder, Colorado, United States)
We present the research area of personal dream informatics: studying the self-information systems that support dream engagement and communication between the dreaming self and the wakeful self. Through a survey study of 281 individuals primarily recruited from an online community dedicated to dreaming, we develop a dream-information systems view of dreaming and dream tracking as a type of self-information system. While dream-information systems are characterized by diverse tracking processes, motivations, and outcomes, they are universally constrained by the ephemeral dreamset: the short window between waking up and the rapid loss of dream memories. By developing a system dynamics model of dreaming, we highlight feedback loops that serve as high-leverage points for technology designers, and we suggest a variety of design considerations for crafting technology that best supports dream recall, dream tracking, and dreamwork for nightmare relief and personal development.
A Layered Authoring Tool for Stylized 3D Animations
Jiaju Ma (Brown University, Providence, Rhode Island, United States); Li-Yi Wei (Adobe Research, San Jose, California, United States); Rubaiat Habib Kazi (Adobe Research, Seattle, Washington, United States)
Guided by the 12 principles of animation, stylization is a core 2D animation feature but has mainly been used by experienced animators. Although there are tools for stylizing 2D animations, creating stylized 3D animations remains challenging due to the additional spatial dimension and the need for responsive actions like contact and collision. We propose a system that helps users create stylized casual 3D animations. A layered authoring interface balances ease of use and expressiveness. Our surface-level UI is a timeline sequencer that lets users add preset stylization effects, such as squash and stretch and follow through, to plain motions. Users can adjust spatial and temporal parameters to fine-tune these stylizations. These edits are propagated to our node-graph-based second-level UI, in which users can create custom stylizations once they are comfortable with the surface-level UI. Our system also enables the stylization of interactions among multiple objects, such as force, energy, and collision. A pilot user study showed that our fluid layered UI design supports both ease of use and expressiveness better than existing tools.
Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces
Ryo Suzuki (University of Calgary, Calgary, Alberta, Canada); Adnan Karim (University of Calgary, Calgary, Alberta, Canada); Tian Xia (University of Calgary, Calgary, Alberta, Canada); Hooman Hedayati (University of Colorado Boulder, Boulder, Colorado, United States); Nicolai Marquardt (University College London, London, United Kingdom)
This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
"Chat Has No Chill": A Novel Physiological Interaction for Engaging Live Streaming Audiences
Raquel Breejon Robinson (University of Saskatchewan, Saskatoon, Saskatchewan, Canada); Ricardo Rheeder (University of Saskatchewan, Saskatoon, Saskatchewan, Canada); Madison Klarkowski (University of Saskatchewan, Saskatoon, Saskatchewan, Canada); Regan L. Mandryk (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)
Now more than ever, people are using online platforms to communicate. Twitch, the foremost platform for live game streaming, offers many communication modalities. However, the platform lacks representation of social cues and signals of the audience experience, which are innately present in live events. To address this, we present a technology probe that captures the audience energy and response in a game streaming context. We designed a game and integrated a custom-communication modality—Commons Sense—in which the audience members' heart rates are sensed via webcam, averaged, and fed into a video game to affect sound, lighting, and difficulty. We conducted an `in-the-wild' evaluation with four Twitch streamers and their audience members (N=55) to understand how these groups interacted through Commons Sense. Audience members and streamers indicated high levels of enjoyment and engagement with Commons Sense, suggesting the potential of physiological interaction as a beneficial communication tool in live streaming.
immersivePOV: Filming How-To Videos with a Head-Mounted 360° Action Camera
Kevin Huang (University of Toronto, Toronto, Ontario, Canada); Jiannan Li (University of Toronto, Toronto, Ontario, Canada); Mauricio Sousa (University of Toronto, Toronto, Ontario, Canada); Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
How-to videos are often shot using camera angles that may not be optimal for learning motor tasks, with a prevalent use of the third-person perspective. We present immersivePOV, an approach to filming how-to videos from an immersive first-person perspective using a head-mounted 360° action camera. immersivePOV how-to videos can be viewed in a Virtual Reality headset, giving the viewer an eye-level viewpoint with three degrees of freedom. We evaluated our approach with two everyday motor tasks against a baseline first-person perspective and a third-person perspective. In a between-subjects study, participants were assigned to watch the task videos and then replicate the tasks. Results suggest that immersivePOV reduced perceived cognitive load and facilitated task learning. We discuss how immersivePOV can also streamline the video production process for content creators. Altogether, we conclude that immersivePOV is an effective approach to filming how-to videos for learners and content creators alike.
Eyes-Off Your Fingers: Gradual Surface Haptic Feedback Improves Eyes-Free Touchscreen Interaction
Corentin Bernard (Aix Marseille Univ, CNRS, PRISM, Marseille, France); Jocelyn Monnoyer (CNRS / Aix-Marseille University, Marseille, France); Sølvi Ystad (Aix Marseille Univ, CNRS, PRISM, Marseille, France); Michael Wiertlewski (TU Delft, Delft, Netherlands)
Moving a slider to set the music volume or control the air conditioning is a familiar task that requires little attention. However, adjusting a virtual slider on a featureless touchscreen is much more demanding and can be dangerous in situations such as driving. Here, we study how gradual tactile feedback, provided by a haptic touchscreen, can replace visual cues. As users adjust a setting with their finger, they feel a continuously changing texture whose spatial frequency correlates with the value of the setting. We demonstrate that, after training with visual and auditory feedback, users are able to adjust a setting on a haptic touchscreen without looking at the screen, thereby reducing visual distraction. Every learning strategy yielded similar performance, suggesting amodal integration. This study shows that surface haptics can provide intuitive and precise tuning possibilities for tangible interfaces on touchscreens.
FlatMagic: Improving Flat Colorization through AI-driven Design for Digital Comic Professionals
Chuan Yan (George Mason University, Fairfax, Virginia, United States); John Joon Young Chung (University of Michigan, Ann Arbor, Michigan, United States); Yoon Kiheon (Pusan National University, Pusan, Korea, Republic of); Yotam Gingold (George Mason University, Fairfax, Virginia, United States); Eytan Adar (University of Michigan, Ann Arbor, Michigan, United States); Sungsoo Ray Hong (George Mason University, Fairfax, Virginia, United States)
Creating digital comics involves multiple stages, some creative and some menial. For example, coloring a comic requires a labor-intensive stage known as 'flatting,' or masking segments of continuous color, as well as creative shading, lighting, and stylization stages. The use of AI can automate the colorization process, but early efforts have revealed limitations (technical and UX) to full automation. Via a formative study of professionals, we identify flatting as a bottleneck and key target of opportunity for human-guided, AI-driven automation. Based on this insight, we built FlatMagic, an interactive, AI-driven flat colorization support tool for Photoshop. Our user studies found that using FlatMagic significantly reduced professionals' real and perceived effort versus their current practice. While participants effectively used FlatMagic, we also identified potential constraints in interactions with AI and partially automated workflows. We reflect on implications for comic-focused tools and the benefits and pitfalls of intermediate representations and partial automation in designing human-AI collaboration tools for professionals.
Shaping Textile Sliders: An Evaluation of Form Factors and Tick Marks for Textile Sliders
Oliver Nowak (RWTH Aachen University, Aachen, Germany); René Schäfer (RWTH Aachen University, Aachen, Germany); Anke Brocker (RWTH Aachen University, Aachen, Germany); Philipp Wacker (RWTH Aachen University, Aachen, Germany); Jan Borchers (RWTH Aachen University, Aachen, Germany)
Textile interfaces enable designers to integrate unobtrusive media and smart home controls into furniture such as sofas. While the technical aspects of such controllers have been the subject of numerous research projects, the physical form factor of these controls has received little attention so far. This work investigates how general design properties, such as overall slider shape, raised vs. recessed sliders, and number and layout of tick marks, affect users' preferences and performance. Our first user study identified a preference for certain design combinations, such as recessed, closed-shaped sliders. Our second user study included performance measurements on variations of the preferred designs from study 1, and took a closer look at tick marks. Tick marks supported orientation better than slider shape. Sliders with at least three tick marks were preferred, and performed well. Non-uniform, equally distributed tick marks reduced the movements users needed to orient themselves on the slider.
Supercharging Trial-and-Error for Learning Complex Software Applications
Damien Masson (Autodesk Research, Toronto, Ontario, Canada); Jo Vermeulen (Autodesk Research, Toronto, Ontario, Canada); George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada); Justin Matejka (Autodesk Research, Toronto, Ontario, Canada)
Despite an abundance of carefully-crafted tutorials, trial-and-error remains many people’s preferred way to learn complex software. Yet, approaches to facilitate trial-and-error (such as tooltips) have evolved very little since the 1980s. While existing mechanisms work well for simple software, they scale poorly to large feature-rich applications. In this paper, we explore new techniques to support trial-and-error in complex applications. We identify key benefits and challenges of trial-and-error, and introduce a framework with a conceptual model and design space. Using this framework, we developed three techniques: ToolTrack to keep track of trial-and-error progress; ToolTrip to go beyond trial-and-error of single commands by highlighting related commands that are frequently used together; and ToolTaste to quickly and safely try commands. We demonstrate how these techniques facilitate trial-and-error, as illustrated through a proof-of-concept implementation in the CAD software Fusion 360. We conclude by discussing possible scenarios and outline directions for future research on trial-and-error.
Logic Bonbon: Exploring Food as Computational Artifact
Jialin Deng (Monash University, Melbourne, Victoria, Australia); Patrick Olivier (Monash University, Melbourne, Victoria, Australia); Josh Andres (The Australian National University, Canberra, Australian Capital Territory, Australia); Kirsten Ellis (Monash University, Melbourne, Victoria, Australia); Ryan Wee (Monash University, Melbourne, Victoria, Australia); Florian ‘Floyd’ Mueller (Monash University, Melbourne, Victoria, Australia)
In recognition of food’s significant experiential pleasures, culinary practitioners and designers are increasingly exploring novel combinations of computing technologies and food. However, despite many creative endeavors, proposals and prototypes have so far largely maintained a traditional divide, treating food and technology as separate entities. In contrast, we present a “Research through Design” exploration of the notion of food as computational artifact, wherein food itself is the material of computation. We describe the Logic Bonbon, a dessert that can hydrodynamically regulate its flavor via a fluidic logic system. Through a study of experiencing the Logic Bonbon and reflection on our design practice, we offer a provisional account of how food as computational artifact can mediate new interactions through a novel approach to food-computation integration that promotes an enriched future of Human-Food Interaction.
Prevalence and Salience of Problematic Microtransactions in Top-Grossing Mobile and PC Games: A Content Analysis of User Reviews
Elena Petrovskaya (University of York, York, United Kingdom); Sebastian Deterding (University of York, York, United Kingdom); David I. Zendle (University of York, York, North Yorkshire, United Kingdom)
Microtransactions have become a major monetisation model in digital games, shaping their design, impacting their player experience, and raising ethical concerns. Research in this area has chiefly focused on loot boxes. This begs the question of whether other microtransactions might actually be more relevant and problematic for players. We therefore conducted a content analysis of negative player reviews (n=801) of top-grossing mobile and desktop games to determine which problematic microtransactions are most prevalent and salient for players. We found that problematic microtransactions were common, with mobile games featuring more frequent and more varied techniques than desktop games. Across both platforms, players minded issues related to fairness, transparency, and degraded user experience, supporting prior theoretical work, and, importantly, took issue with monetisation-driven design as such. We identify future research needs on why microtransactions in particular spark this critique, and which player communities it may be more or less representative of.
Does Dynamically Drawn Text Improve Learning? Investigating the Effect of Text Presentation Styles in Video Learning
Ashwin Ram (National University of Singapore, Singapore, Singapore); Shengdong Zhao (National University of Singapore, Singapore, Singapore)
Dynamically drawn content (e.g., handwritten text) in learning videos is believed to improve users’ engagement and learning over static PowerPoint-based content. However, evidence from the existing literature is inconclusive. With the emergence of Optical Head-Mounted Displays (OHMDs), recent work has shown that video learning can be adapted for on-the-go scenarios. To better understand the role of dynamic drawing, we decoupled dynamically drawn text into two factors (font style and motion of appearance) and studied their impact on learning performance in two usage scenarios (seated with a desktop and walking with an OHMD). We found that although letter-traced text was more engaging for some users, most preferred learning with typeface text that displayed the entire word at once, and they achieved better recall with it (46.7% higher), regardless of usage scenario. Insights from these studies can better inform designers on how to present text in videos for ubiquitous access.
Towards Understanding Diminished Reality
Yi Fei Cheng (Swarthmore College, Swarthmore, Pennsylvania, United States); Hang Yin (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Yukang Yan (Tsinghua University, Beijing, China); Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France); David Lindlbauer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Diminished reality (DR) refers to the concept of removing content from a user's visual environment. While its implementation is becoming feasible, it is still unclear how users perceive and interact in DR-enabled environments and what applications it benefits. To address this challenge, we first conduct a formative study to compare user perceptions of DR and mediated reality effects (e.g., changing the color or size of target elements) in four example scenarios. Participants preferred removing objects through opacity reduction (i.e., the standard DR implementation) and appreciated mechanisms for maintaining a contextual understanding of diminished items (e.g., outlining). In a second study, we explore the user experience of performing tasks within DR-enabled environments. Participants selected which objects to diminish and the magnitude of the effects when performing two separate tasks (video viewing, assembly). Participants were comfortable with decreased contextual understanding, particularly for less mobile tasks. Based on the results, we define guidelines for creating general DR-enabled environments.
Exploring Spatial UI Transition Mechanisms with Head-Worn Augmented Reality
Feiyu Lu (Virginia Tech, Blacksburg, Virginia, United States); Yan Xu (Facebook, Redmond, Washington, United States)
Imagine a future in which people comfortably wear augmented reality (AR) displays all day: how do we design interfaces that adapt to contextual changes as people move around? In current operating systems, most AR content defaults to staying at a fixed location until manually moved by the user. However, this approach puts the burden of user interface (UI) transition solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. We then addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, fully automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results provide valuable lessons about the trade-offs between UI automation levels, controllability, user agency, and the impact of prediction errors.
Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities
Jeongyeon Kim (KAIST, Daejeon, Korea, Republic of); Yubin Choi (KAIST, Daejeon, Korea, Republic of); Meng Xia (KAIST, Daejeon, Korea, Republic of); Juho Kim (KAIST, Daejeon, Korea, Republic of)
Most video-based learning content is designed for desktops without considering mobile environments. We (1) investigate the gap between mobile learners’ challenges and video engineers’ considerations using mixed methods and (2) provide design guidelines for creating mobile-friendly MOOC videos. To uncover learners’ challenges, we conducted a survey (n=134) and interviews (n=21), and evaluated the mobile adequacy of current MOOCs by analyzing 41,722 video frames from 101 video lectures. Interview results revealed low readability and situationally-induced impairments as major challenges. The content analysis showed a low guideline compliance rate for key design factors. We then interviewed 11 video production engineers to investigate design factors they mainly consider. The engineers mainly focus on the size and amount of content while lacking consideration for color, complex images, and situationally-induced impairments. Finally, we present and validate guidelines for designing mobile-friendly MOOCs, such as providing adaptive and customizable visual design and context-aware accessibility support.
Janus Screen: A Screen with Switchable Projection Surfaces Using Wire Grid Polarizer
Wataru Yamada (NTT DOCOMO, INC., Tokyo, Japan); Sawa Korogi (NTT DOCOMO, INC., Tokyo, Japan); Keiichi Ochiai (NTT DOCOMO, INC., Tokyo, Japan)
In this paper, we present a novel screen system employing polarizers that allow switching of the projection surface to the front, rear, or both sides using only two projectors on one side. In this system, we propose a method that employs two projectors equipped with polarizers and a multi-layered screen comprising an anti-reflective plate, transparent screen, and wire grid polarizer. The multi-layered screen changes whether the projected image is shown on the front or rear side of the screen depending on the polarization direction of the incident light. Hence, the proposed method can project images on the front, rear, or both sides of the screen by projecting images from either or both projectors using polarizers. In addition, the proposed method can be easily deployed by simply attaching multiple optical films. We implement a prototype and confirm that the proposed method can selectively switch the projection surface.
"I Didn't Know I Looked Angry": Characterizing Observed Emotion and Reported Affect at Work
Harmanpreet Kaur (University of Michigan, Ann Arbor, Michigan, United States); Daniel McDuff (Microsoft, Seattle, Washington, United States); Alex C. Williams (University of Tennessee, Knoxville, Knoxville, Tennessee, United States); Jaime Teevan (Microsoft, Redmond, Washington, United States); Shamsi Iqbal (Microsoft Research, Redmond, Washington, United States)
With the growing prevalence of affective computing applications, Automatic Emotion Recognition (AER) technologies have garnered attention in both research and industry settings. Initially limited to speech-based applications, AER technologies now include analysis of facial landmarks to provide predicted probabilities of a common subset of emotions (e.g., anger, happiness) for faces observed in an image or video frame. In this paper, we study the relationship between AER outputs and the self-reports of affect employed by prior work, in the context of information work at a technology company. We compare the continuous observed emotion output from an AER tool to discrete reported affect obtained via a one-day combined tool-use and diary study (N=15). We provide empirical evidence showing that these signals do not completely align, and find that using additional workplace context improves alignment only up to 58.6%. These results suggest that affect must be studied in the context in which it is expressed, and that observed emotion signals should not replace internally reported affect in affective computing applications.
O&O: A DIY toolkit for designing and rapid prototyping olfactory interfaces
Yuxuan Lei (Tsinghua University, Beijing, China); Qi Lu (Tsinghua University, Beijing, China); Yingqing Xu (Tsinghua University, Beijing, China)
Constructing olfactory interfaces on demand requires significant design proficiency and engineering effort. The absence of powerful, convenient tools that reduce innovation complexity poses obstacles for future research in the area. To address this problem, we propose O&O, a modular olfactory interface DIY toolkit. The toolkit consists of: (1) a scent generation kit, a set of electronics and accessories that support three common scent vaporization techniques; (2) a module construction kit, a set of primitive cardboard modules for assembling permutable functional structures; and (3) a design manual, a step-by-step design thinking framework that directs the decision-making and prototyping process. We organized a formal workshop with 19 participants and four solo DIY trials to evaluate the capability of the toolkit, overall user engagement, the creations in both sessions, and iterative suggestions. Finally, we discuss design implications and future opportunities for further research.
Barriers to Expertise in Citizen Science Games
Josh Aaron Miller (Northeastern University, Boston, Massachusetts, United States); Seth Cooper (Northeastern University, Boston, Massachusetts, United States)
Expertise-centric citizen science games (ECCSGs) can be powerful tools for crowdsourcing scientific knowledge production. However, to be effective, these games must train their players to become experts, which is difficult in practice. In this study, we investigated the path to expertise and the barriers involved by interviewing players of three ECCSGs: Foldit, Eterna, and Eyewire. We then applied reflexive thematic analysis to generate themes from their experiences and produce a model of expertise and its barriers. We found that expertise is constructed through a cycle of exploratory and social learning but hindered by instructional design issues. Moreover, exploration is slowed by a lack of polish in the game artifact, and social learning is disrupted by a lack of clear communication. Based on our analysis, we make several recommendations for CSG developers, including: collaborating with professionals with the required skill sets; providing social features and feedback systems; and improving scientific communication.
A Model Predictive Control Approach for Reach Redirection in Virtual Reality
Eric J. Gonzalez (Stanford University, Stanford, California, United States); Elyse D. Z. Chase (Stanford University, Stanford, California, United States); Pramod Kotipalli (Stanford University, Stanford, California, United States); Sean Follmer (Stanford University, Stanford, California, United States)
Reach redirection is an illusion-based virtual reality (VR) interaction technique where a user’s virtual hand is shifted during a reach in order to guide their real hand to a physical location. Prior works have not considered the underlying sensorimotor processes driving redirection. In this work, we propose adapting a sensorimotor model for goal-directed reach to obtain a model for visually-redirected reach, specifically by incorporating redirection as a sensory bias in the state estimate used by a minimum jerk motion controller. We validate and then leverage this model to develop a Model Predictive Control (MPC) approach for reach redirection, enabling the real-time generation of spatial warping according to desired optimization criteria (e.g., redirection goals) and constraints (e.g., sensory thresholds). We illustrate this approach with two example criteria -- redirection to a desired point and redirection along a desired path -- and compare our approach against existing techniques in a user evaluation.
OVRlap: Perceiving Multiple Locations Simultaneously to Improve Interaction in VR
Jonas Schjerlund (University of Copenhagen, Copenhagen, Denmark); Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark); Joanna Bergström (University of Copenhagen, Copenhagen, Denmark)
We introduce OVRlap, a VR interaction technique that lets the user perceive multiple places simultaneously from a first-person perspective. OVRlap achieves this by overlapping viewpoints. At any time, only one viewpoint is active, meaning that the user may interact with objects therein. Objects seen from the active viewpoint are opaque, whereas objects seen from passive viewpoints are transparent. This allows users to perceive multiple locations at once and easily switch to the one in which they want to interact. We compare OVRlap and a single-viewpoint technique in a study where 20 participants complete object-collection and monitoring tasks. We find that participants are significantly faster and move their head significantly less with OVRlap in both tasks. We propose how the technique might be improved through automated switching of the active viewpoint and intelligent viewpoint rendering.
ReCompFig: Designing Dynamically Reconfigurable Kinematic Devices Using Compliant Mechanisms and Tensioning Cables
Humphrey Yang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Tate Johnson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Ke Zhong (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Dinesh K. Patel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Gina Olson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Carmel Majidi (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Mohammad Islam (Materials Science and Engineering, Pittsburgh, Pennsylvania, United States); Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
From creating input devices to rendering tangible information, the field of HCI is interested in using kinematic mechanisms to create human-computer interfaces. Yet, due to fabrication and design challenges, it is often difficult to create kinematic devices that are compact and have multiple reconfigurable motional degrees of freedom (DOFs) depending on the interaction scenarios. In this work, we combine compliant mechanisms (CMs) with tensioning cables to create dynamically reconfigurable kinematic mechanisms. The devices’ kinematics (DOFs) is enabled and determined by the layout of bendable rods. The additional cables function as on-demand motion constraints that can dynamically lock or unlock the mechanism’s DOFs as they are tightened or loosened. We provide algorithms and a design tool prototype to help users design such kinematic devices. We also demonstrate various HCI use cases including a kinematic haptic display, a haptic proxy, and a multimodal input device.
Virtual Transcendent Dream: Empowering People through Embodied Flying in VR
Pinyao Liu (Simon Fraser University, Surrey, British Columbia, Canada); Ekaterina R. Stepanova (Simon Fraser University, Surrey, British Columbia, Canada); Alexandra Kitson (Simon Fraser University, Surrey, British Columbia, Canada); Thecla Schiphorst (Simon Fraser University, Vancouver, British Columbia, Canada); Bernhard E. Riecke (Simon Fraser University, Vancouver, British Columbia, Canada)
Flying dreams have the potential to evoke a feeling of empowerment (or self-efficacy: confidence in our ability to succeed) and self-transcendent experience (STE), both of which have been shown to contribute to an individual’s overall well-being. However, these exceptional dreaming experiences remain difficult to induce at will. Inspired by the potential of Virtual Reality (VR) to support profound emotional experiences, we explored whether a VR flying interface with more embodied self-motion cues could contribute to the benefits associated with flying dreams (i.e., STE and empowerment). Our results indicated that a flying interface with more self-motion cues indeed better supported STE and empowerment. We derived several design considerations: obscurity, extraordinary light, and a supportive setting. Our results contribute to the discourse around design guidelines for self-transcendence and empowerment in VR, which may further be applied to the improvement of mental well-being.
DreamStream: Immersive and Interactive Spectating in VR
Balasaravanan Thoravi Kumaravel (UC Berkeley, Berkeley, California, United States)Andrew D. Wilson (Microsoft Research, Redmond, Washington, United States)
Today, spectating and streaming virtual reality (VR) activities typically involve spectators viewing a 2D stream of the VR user’s view. Streaming 2D videos of gameplay is popular and well supported by platforms such as Twitch; however, the generic streaming of full 3D representations is less explored. Thus, while the VR player’s experience may be fully immersive, spectators are limited to 2D videos. This asymmetry lessens the overall experience for spectators, who themselves may be eager to spectate in VR. DreamStream puts viewers in the virtual environment of the VR application, allowing them to look “over the shoulder” of the VR player. Spectators can view streamed VR content immersively in 3D, independently explore the VR scene beyond what the VR player sees, and ultimately cohabit the virtual environment alongside the VR player. For the VR player, DreamStream provides spatial awareness of all their spectators. DreamStream retrofits and works with existing VR applications. We discuss the design and implementation of DreamStream, and carry out three informal qualitative evaluations. These evaluations shed light on the strengths and weaknesses of using DreamStream for the purpose of interactive spectating. Our participants found that DreamStream’s VR viewer interface offered increased immersion, and made it easier to communicate and interact with the VR player.
Interpolating Happiness: Understanding the Intensity Gradations of Face Emojis Across Cultures
Andrey Krekhov (University of Duisburg-Essen, Duisburg, NRW, Germany)Katharina Emmerich (University of Duisburg-Essen, Duisburg, NRW, Germany)Johannes Fuchs (University of Konstanz, Konstanz, Germany)Jens Harald Krueger (University of Duisburg-Essen, Duisburg, NRW, Germany)
We frequently utilize face emojis to express emotions in digital communication. But how wholly and precisely do such pictographs sample the emotional spectrum, and are there gaps to be closed? Our research establishes emoji intensity scales for seven basic emotions: happiness, anger, disgust, sadness, shock, annoyance, and love. In our survey (N = 1195), participants worldwide assigned emotions and intensities to 68 face emojis. According to our results, certain feelings, such as happiness or shock, are visualized by manifold emojis covering a broad spectrum of intensities. Other feelings, such as anger, have limited and only very intense representative visualizations. We further emphasize that cultural background influences the perception of emojis: for instance, linear-active cultures (e.g., UK, Germany) rate the intensity of such visualizations higher than multi-active (e.g., Brazil, Russia) or reactive cultures (e.g., Indonesia, Singapore). To summarize, our manuscript promotes future research on more expressive, culture-aware emoji design.
FlexHaptics: A Design Method for Passive Haptic Inputs Using Planar Compliant Structures
Hongnan Lin (Georgia Institute of Technology, Atlanta, Georgia, United States)Liang He (University of Washington, Seattle, Washington, United States)Fangli Song (School of Design, Atlanta, Georgia, United States)Yifan Li (Georgia Institute of Technology, Atlanta, Georgia, United States)Tingyu Cheng (Interactive Computing, Atlanta, Georgia, United States)Clement Zheng (National University of Singapore, Singapore, Singapore, Singapore)Wei Wang (Hunan University, Changsha, China)HyunJoo Oh (Georgia Institute of Technology, Atlanta, Georgia, United States)
This paper presents FlexHaptics, a design method for creating custom haptic input interfaces. Our approach leverages planar compliant structures whose force-deformation relationship can be altered by adjusting the geometries. Embedded with such structures, a FlexHaptics module exerts a fine-tunable haptic effect (i.e., resistance, detent, or bounce) along a movement path (i.e., linear, rotary, or ortho-planar). These modules can work separately or combine into an interface with complex movement paths and haptic effects. To enable the parametric design of FlexHaptic modules, we provide a design editor that converts user-specified haptic properties into underlying mechanical structures of haptic modules. We validate our approach and demonstrate the potential of FlexHaptic modules through six application examples, including a slider control for a painting application and a piano keyboard interface on touchscreens, a tactile low vision timer, VR game controllers, and a compound input device of a joystick and a two-step button.
FingerX: Rendering Haptic Shape of Virtual Objects Augmented from Real Objects using Extendable and Withdrawable Supports on Fingers
Hsin-Ruey Tsai (National Chengchi University, Taipei, Taiwan)Chieh Tsai (National Chengchi University, Taipei, Taiwan)Yu-So Liao (National Chengchi University, Taipei, Taiwan)Yi-Ting Chiang (National Chengchi University, Taipei, Taiwan)Zhong-Yi Zhang (National Chengchi University, Taipei City, Taiwan)
Interacting not only with virtual objects but also with real objects, or even with virtual objects augmented by real objects, is becoming a trend in virtual reality (VR) interaction and is common in augmented reality (AR). However, current haptic shape rendering devices generally focus on feedback for virtual objects and require users to put down or take off those devices to perceive real objects. Therefore, we propose FingerX to render haptic shapes and enable users to touch, grasp, and interact with virtual and real objects simultaneously. An extender on the fingertip extends to a corresponding height between the fingertip and the real object or the hand to render virtual shapes. A ring rotates and withdraws the extender behind the fingertip when touching real objects. By independently controlling four extenders and rings on each finger except the pinky, FingerX renders feedback in three common scenarios: touching virtual objects augmented by real environments (e.g., a desk), grasping virtual objects augmented by real objects (e.g., a bottle), and grasping virtual objects in the hand. We conducted a shape recognition study to evaluate the recognition rates for these three scenarios and obtained an average recognition rate of 76.59% with shape visual feedback. We then performed a VR study to observe how users interact with virtual and real objects simultaneously and to verify that FingerX significantly enhances VR realism compared to current vibrotactile methods.
"Your Eyes Say You Have Used This Password Before": Identifying Password Reuse from Gaze Behavior and Keystroke Dynamics
Yasmeen Abdrabou (Bundeswehr University Munich, Munich, Bayern, Germany)Johannes Schütte (Bundeswehr University Munich, Munich, Germany)Ahmed Shams (German University in Cairo, Cairo, Egypt)Ken Pfeuffer (Aarhus University, Aarhus, Denmark)Daniel Buschek (University of Bayreuth, Bayreuth, Germany)Mohamed Khamis (University of Glasgow, Glasgow, United Kingdom)Florian Alt (Bundeswehr University Munich, Munich, Germany)
A significant drawback of text passwords for end-user authentication is password reuse. We propose a novel approach to detect password reuse by leveraging gaze as well as typing behavior, and we study its accuracy. We collected gaze and typing behavior from 49 users while they created accounts for 1) a webmail client and 2) a news website. While most participants came up with a new password, 32% reported having reused an old password when setting up their accounts. We then compared different ML models to detect password reuse from the collected data. Our models achieve an accuracy of up to 87.7% in detecting password reuse from gaze, 75.8% from typing, and 88.75% when considering both types of behavior. We demonstrate that, using gaze, password reuse can already be detected during the registration process, before users have entered their password. Our work paves the road for developing novel interventions to prevent password reuse.
Causality-preserving Asynchronous Reality
Andreas Rene Fender (ETH Zürich, Zurich, Switzerland)Christian Holz (ETH Zürich, Zurich, Switzerland)
Mixed Reality is gaining interest as a platform for collaboration and focused work to a point where it may supersede current office settings in future workplaces. At the same time, we expect that interaction with physical objects and face-to-face communication will remain crucial for future work environments, which is a particular challenge in fully immersive Virtual Reality. In this work, we reconcile those requirements through a user's individual Asynchronous Reality, which enables seamless physical interaction across time. When a user is unavailable, e.g., focused on a task or in a call, our approach captures co-located or remote physical events in real-time, constructs a causality graph of co-dependent events, and lets immersed users revisit them at a suitable time in a causally accurate way. Enabled by our system AsyncReality, we present a workplace scenario that includes walk-in interruptions during a person's focused work, physical deliveries, and transient spoken messages. We then generalize our approach to a use-case agnostic concept and system architecture. We conclude by discussing the implications of an Asynchronous Reality for future offices.
Paracentral and near-peripheral visualizations: Towards attention-maintaining secondary information presentation on OHMDs during in-person social interactions
Nuwan Nanayakkarawasam Peru Kandage Janaka (National University of Singapore, Singapore, Singapore)Chloe Haigh (National University of Singapore, Singapore, Singapore)Hyeongcheol Kim (National University of Singapore, Singapore, Singapore)Shan Zhang (National University of Singapore, Singapore, Singapore)Shengdong Zhao (National University of Singapore, Singapore, Singapore)
Optical see-through Head-Mounted Displays (OST HMDs, OHMDs) are known to facilitate situational awareness while accessing secondary information. However, information displayed on OHMDs can cause attention shifts, which distract users from natural social interactions. We hypothesize that information displayed in paracentral and near-peripheral vision can be better perceived while the user is maintaining eye contact during face-to-face conversations. Leveraging this idea, we designed a circular progress bar to provide progress updates in paracentral and near-peripheral vision. We compared it with textual and linear progress bars under two conversation settings: a simulated one with a digital conversation partner and a realistic one with a real partner. Results show that a circular progress bar can effectively reduce notification distractions without losing eye contact and is preferred by users. Our findings highlight the potential of utilizing paracentral and near-peripheral vision for secondary information presentation on OHMDs.
Hand Interfaces: Using Hands to Imitate Objects in AR/VR for Expressive Interactions
Siyou Pei (University of California, Los Angeles, Los Angeles, California, United States)Alexander Chen (University of California, Los Angeles, Los Angeles, California, United States)Jaewook Lee (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States)Yang Zhang (University of California, Los Angeles, Los Angeles, California, United States)
Augmented reality (AR) and virtual reality (VR) technologies create exciting new opportunities for people to interact with computing resources and information. Less exciting is the need for holding hand controllers, which limits applications that demand expressive, readily available interactions. Prior research investigated freehand AR/VR input by transforming the user's body into an interaction medium. In contrast to previous work that has users' hands grasp virtual objects, we propose a new interaction technique that lets users' hands become virtual objects by imitating the objects themselves. For example, a thumbs-up hand pose is used to mimic a joystick. We created a wide array of interaction designs around this idea to demonstrate its applicability in object retrieval and interactive control tasks. Collectively, we call these interaction designs Hand Interfaces. From a series of user studies comparing Hand Interfaces against various baseline techniques, we collected quantitative and qualitative feedback, which indicates that Hand Interfaces are effective, expressive, and fun to use.
First Steps Towards Designing Electrotactons: Investigating Intensity and Pulse Frequency as Parameters for Electrotactile Cues
Yosuef Alotaibi (University of Glasgow, Glasgow, United Kingdom)John H. Williamson (University of Glasgow, Glasgow, United Kingdom)Stephen Anthony Brewster (University of Glasgow, Glasgow, United Kingdom)
Electrotactile stimulation is a novel form of haptic feedback. There is little work investigating its basic design parameters and how they create effective tactile cues. This paper describes two experiments that extend our knowledge of two key parameters. The first investigated the combination of pulse width and amplitude Intensity on sensations of urgency, annoyance, valence and arousal. Results showed significant effects: increasing Intensity caused higher ratings of urgency, annoyance and arousal but reduced valence. We established clear levels for differentiating each sensation. A second study then investigated Intensity and Pulse Frequency to find out how many distinguishable levels could be perceived. Results showed that both Intensity and Pulse Frequency significantly affected perception, with four distinguishable levels of Intensity and two of Pulse Frequency. These results add significant new knowledge about the parameter space of electrotactile cue design and help designers select suitable properties to use when creating electrotactile cues.
Adaptive Empathy Learning Support in Peer Review Scenarios
Thiemo Wambsganss (University of St. Gallen, Sankt Gallen, Switzerland)Matthias Soellner (University of Kassel, Kassel, Germany)Kenneth R. Koedinger (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Jan Marco Leimeister (University of St. Gallen, St. Gallen, Switzerland)
Advances in Natural Language Processing offer techniques to detect the level of empathy in texts. To test whether individual feedback on students’ empathy levels during their peer review writing process helps them write more empathic reviews, we developed ELEA, an adaptive writing support system that provides students with feedback on cognitive and emotional empathy structures. We compared ELEA to a proven empathy support tool in a peer review setting with 119 students. We found that students using ELEA wrote more empathic peer reviews, with a higher level of emotional empathy, compared to the control group. The high perceived skill learning, technology acceptance, and level of enjoyment provide promising evidence for using such an approach as a feedback application in traditional learning settings. Our results indicate that learning applications based on NLP can foster students' empathic writing skills in peer review scenarios.
ReflecTouch: Detecting Grasp Posture of Smartphone Using Corneal Reflection Images
Xiang Zhang (Keio University, Yokohama City, Japan)Kaori Ikematsu (Yahoo Japan Corporation, Tokyo, Japan)Kunihiro Kato (Tokyo University of Technology, Tokyo, Japan)Yuta Sugiura (Keio University, Yokohama City, Japan)
By sensing how a user is holding a smartphone, adaptive user interfaces become possible, such as those that automatically switch the displayed content and the position of graphical user interface (GUI) components to match how the phone is being held. We propose ReflecTouch, a novel method for detecting how a smartphone is being held by capturing images of the smartphone screen reflected on the cornea with the built-in front camera. In these images, the areas where the user places their fingers on the screen appear as shadows, which makes it possible to estimate the grasp posture. Since most smartphones have a front camera, this method can be used regardless of the device model, and no additional sensors or hardware are required. We conducted data collection experiments to verify the classification accuracy of the proposed method for six different grasp postures; the accuracy was 85%.
Prediction for Retrospection: Integrating Algorithmic Stress Prediction into Personal Informatics Systems for College Students' Mental Health
Taewan Kim (KAIST, Daejeon, Korea, Republic of)Haesoo Kim (KAIST, Daejeon, Korea, Republic of)Ha Yeon Lee (Seoul National University, Seoul, Korea, Republic of)Hwarang Goh (Inha University, Incheon, Korea, Republic of)Shakhboz Abdigapporov (Inha University, Michuhol-gu, Incheon, Korea, Republic of)Mingon Jeong (Hanyang University, Seoul, Korea, Republic of)Hyunsung Cho (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Kyungsik Han (Hanyang University, Seoul, Korea, Republic of)Youngtae Noh (KENTECH, Naju-si, Jeollanam-do, Korea, Republic of)Sung-Ju Lee (KAIST, Daejeon, Korea, Republic of)Hwajung Hong (KAIST, Daejeon, Korea, Republic of)
Reflecting on stress-related data is critical in addressing one’s mental health. Personal Informatics (PI) systems augmented by algorithms and sensors have become popular ways to help users collect and reflect on data about stress. While prediction algorithms in PI systems are mainly used for diagnostic purposes, few studies have examined how the explainability of algorithmic predictions can support user-driven self-insight. To this end, we developed MindScope, an algorithm-assisted stress management system that determines a user's stress level and explains how that level was computed from the user's everyday activities captured by a smartphone. In a 25-day field study conducted with 36 college students, the prediction and explanation supported self-reflection: a process of re-establishing preconceptions about stress by identifying stress patterns and recalling past stress levels and patterns, which led to planning for coping. We discuss the implications of exploiting prediction algorithms to facilitate user-driven retrospection in PI systems.
Paper Trail: An Immersive Authoring System for Augmented Reality Instructional Experiences
Shwetha Rajaram (University of Michigan, Ann Arbor, Michigan, United States)Michael Nebeling (University of Michigan, Ann Arbor, Michigan, United States)
Prior work has demonstrated augmented reality's benefits to education, but current tools are difficult to integrate with traditional instructional methods. We present Paper Trail, an immersive authoring system designed to explore how instructors can create AR educational experiences that leave paper at the core of the interaction and enhance it with various forms of digital media, animations for dynamic illustrations, and clipping masks to guide learning. To inform the system design, we developed five scenarios exploring the benefits that hand-held and head-worn AR can bring to STEM instruction, and we developed a design space of AR interactions enhancing paper based on these scenarios and prior work. Using the example of an AR physics handout, we assessed the system's potential with PhD-level instructors and its usability with XR design experts. In an elicitation study with high-school teachers, we studied how Paper Trail could be used and extended to enable flexible use cases across various domains. We discuss the benefits of immersive paper for supporting diverse student needs and the challenges of making effective use of AR for learning.
Do You See What You Mean? Using Predictive Visualizations to Reduce Optimism in Duration Estimates
Morgane Koval (CNRS, ISIR, Paris, France)Yvonne Jansen (Sorbonne Université, CNRS, ISIR, Paris, France)
Making time estimates, such as how long a given task might take, frequently leads to inaccurate predictions because of an optimistic bias. Previous attempts to alleviate this bias, including decomposing the task into smaller components and listing potential surprises, have not shown any major improvement. This article builds on the premise that these procedures may have failed because they involve compound probabilities and mixture distributions which are difficult to compute in one's head. We hypothesize that predictive visualizations of such distributions would facilitate the estimation of task durations. We conducted a crowdsourced study in which 145 participants provided different estimates of overall and sub-task durations and we used these to generate predictive visualizations of the resulting mixture distributions. We compared participants' initial estimates with their updated ones and found compelling evidence that predictive visualizations encourage less optimistic estimates.
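The abstract's premise, that compound probabilities and mixture distributions of sub-task durations are hard to compute in one's head, can be illustrated with a short Monte Carlo simulation. This is a generic sketch under our own assumptions (the triangular distributions and three-point estimates below are illustrative, not taken from the paper):

```python
import random

def simulate_total_duration(subtasks, n=10000, seed=1):
    """Monte Carlo sketch: draw each sub-task's duration from a
    triangular distribution (optimistic, most likely, pessimistic)
    and sum the draws to approximate the total-duration distribution."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(low, high, mode) for low, mode, high in subtasks)
        for _ in range(n)
    )
    # Median and 90th percentile of the simulated totals.
    return totals[n // 2], totals[int(n * 0.9)]

# Hypothetical three-point estimates (hours) for three sub-tasks.
subtasks = [(1.0, 2.0, 5.0), (0.5, 1.0, 3.0), (2.0, 3.0, 8.0)]
median, p90 = simulate_total_duration(subtasks)
# The sum of the "most likely" values is 6 h, yet the simulated median
# exceeds it because each sub-task distribution is right-skewed.
```

The gap between the naive sum of modes and the simulated median is exactly the kind of quantity a predictive visualization can surface and an unaided estimator tends to miss.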
InfraredTags: Embedding Invisible AR Markers and Barcodes Using Low-Cost, Infrared-Based 3D Printing and Imaging Tools
Mustafa Doga Dogan (MIT CSAIL, Cambridge, Massachusetts, United States)Ahmad Taka (MIT CSAIL, Cambridge, Massachusetts, United States)Michael Lu (MIT CSAIL, Cambridge, Massachusetts, United States)Yunyi Zhu (MIT CSAIL, Cambridge, Massachusetts, United States)Akshat Kumar (MIT CSAIL, Cambridge, Massachusetts, United States)Aakar Gupta (Facebook Inc, Redmond, Washington, United States)Stefanie Mueller (MIT CSAIL, Cambridge, Massachusetts, United States)
Existing approaches for embedding unobtrusive tags inside 3D objects require either complex fabrication or high-cost imaging equipment. We present InfraredTags, which are 2D markers and barcodes imperceptible to the naked eye that can be 3D printed as part of objects, and detected rapidly by low-cost near-infrared cameras. We achieve this by printing objects from an infrared-transmitting filament, which infrared cameras can see through, and by having air gaps inside for the tag's bits, which appear at a different intensity in the infrared image. We built a user interface that facilitates the integration of common tags (QR codes, ArUco markers) with the object geometry to make them 3D printable as InfraredTags. We also developed a low-cost infrared imaging module that augments existing mobile devices and decodes tags using our image processing pipeline. Our evaluation shows that the tags can be detected with little near-infrared illumination (0.2 lux) and from distances as far as 250 cm. We demonstrate how our method enables various applications, such as object tracking and embedding metadata for augmented reality and tangible interactions.
FabricatINK: Personal Fabrication of Bespoke Displays Using Electronic Ink from Upcycled E Readers
Ollie Hanton (University of Bristol, Bristol, United Kingdom)Zichao Shen (University of Bristol, Bristol, United Kingdom)Mike Fraser (University of Bath, Bath, United Kingdom)Anne Roudaut (University of Bristol, Bristol, United Kingdom)
FabricatINK explores the personal fabrication of irregularly-shaped low-power displays using electronic ink (E ink). E ink is a programmable bicolour material used in traditional form-factors such as E readers. It has potential for more versatile use within the scope of personal fabrication of custom-shaped displays, and it has the promise to be the pre-eminent material choice for this purpose. We appraise technical literature to identify properties of E ink, suited to fabrication. We identify a key roadblock, universal access to E ink as a material, and we deliver a method to circumvent this by upcycling broken electronics. We subsequently present a novel fabrication method for irregularly-shaped E ink displays. We demonstrate our fabrication process and E ink's versatility through ten prototypes showing different applications and use cases. By addressing E ink as a material for display fabrication, we uncover the potential for users to create custom-shaped truly bistable displays.
Designing Visuo-Haptic Illusions with Proxies in Virtual Reality: Exploration of Grasp, Movement Trajectory and Object Mass
Martin Feick (Saarland Informatics Campus, Saarbrücken, Germany)Kora Persephone Regitz (Saarland Informatics Campus, Saarbrücken, Germany)Anthony Tang (University of Toronto, Toronto, Ontario, Canada)Antonio Krüger (DFKI, Saarland Informatics Campus, Saarbrücken, Germany)
Visuo-haptic illusions are a method to expand proxy-based interactions in VR by introducing unnoticeable discrepancies between the virtual and real world. Yet, how different design variables affect these illusions with proxies is still unclear. To unpack a subset of variables, we conducted two user studies with 48 participants to explore the impact of (1) different grasping types and movement trajectories, and (2) different grasping types and object masses on the discrepancy that may be introduced. Our Bayes analysis suggests that grasping types and object masses (≤ 500 g) did not noticeably affect the discrepancy, but for movement trajectory, results were inconclusive. Further, we identified a significant difference between unrestricted and restricted movement trajectories. Our data shows considerable differences in participants’ proprioceptive accuracy, which seem to correlate with their prior VR experience. Finally, we illustrate the impact of our key findings on the visuo-haptic illusion design process by showcasing a new design workflow.
Get To The Point! Problem-Based Curated Data Views To Augment Care For Critically Ill Patients
Minfan Zhang (University of Toronto, Toronto, Ontario, Canada)Daniel Ehrmann (Hospital for Sick Children, Toronto, Ontario, Canada)Mjaye Mazwi (Hospital for Sick Children, Toronto, Ontario, Canada)Danny Eytan (Hospital for Sick Children, Toronto, Ontario, Canada)Marzyeh Ghassemi (MIT, Cambridge, Massachusetts, United States)Fanny Chevalier (University of Toronto, Toronto, Ontario, Canada)
Electronic health records in critical care medicine offer unprecedented opportunities for clinical reasoning and decision making. Paradoxically, these data-rich environments have also resulted in clinical decision support systems (CDSSs) that fit poorly into clinical contexts and increase health workers' cognitive load. In this paper, we introduce a novel approach to designing CDSSs that are embedded in clinical workflows, by presenting problem-based curated data views tailored for problem-driven discovery, team communication, and situational awareness. We describe the design and evaluation of one such CDSS, In-Sight, that embodies our approach and addresses the clinical problem of monitoring critically ill pediatric patients. Our work is the result of a co-design process, further informed by empirical data collected through formal usability testing, focus groups, and a simulation study with domain experts. We discuss the potential and limitations of our approach, and share lessons learned in our iterative co-design process.
(Re)discovering the Physical Body Online: Strategies and Challenges to Approach Non-Cisgender Identity in Social Virtual Reality
Guo Freeman (Clemson University, Clemson, South Carolina, United States)Divine Maloney (Clemson University, Clemson, South Carolina, United States)Dane Acena (Clemson University, Clemson, South Carolina, United States)Catherine Barwulor (Clemson University, Clemson , South Carolina, United States)
The contemporary understanding of gender continues to highlight the complexity and variety of gender identities beyond a binary dichotomy regarding one’s biological sex assigned at birth. The emergence and popularity of various online social spaces also makes the digital presentation of gender even more sophisticated. In this paper, we use non-cisgender as an umbrella term to describe diverse gender identities that do not match people’s sex assigned at birth, including Transgender, Genderfluid, and Non-binary. We especially explore non-cisgender individuals’ identity practices and challenges in novel social Virtual Reality (VR) spaces, where they can present, express, and experiment with their identity in ways that traditional online social spaces cannot provide. We provide some of the first empirical evidence of how social VR platforms may introduce new and novel phenomena and practices of approaching diverse gender identities online. We also contribute to re-conceptualizing technology-supported identity practices by highlighting the role of (re)discovering the physical body online, and we inform the design of the emerging metaverse for supporting diverse gender identities in the future.
Visualizing Instructions for Physical Training: Exploring Visual Cues to Support Movement Learning from Instructional Videos
Alessandra Semeraro (Uppsala University, Uppsala, Sweden)Laia Turmo Vidal (Uppsala University, Uppsala, Sweden)
Instructional videos for physical training have gained popularity in recent years among sport and fitness practitioners, due to the proliferation of affordable and ubiquitous forms of online training. Yet, learning movement this way poses challenges: lack of feedback and personalised instructions, and having to rely on personal imitation capacity to learn movements. We address some of these challenges by exploring visual cues’ potential to help people imitate movements from instructional videos. With a Research through Design approach, focused on strength training, we augmented an instructional video with different sets of visual cues: directional cues, body highlights, and metaphorical visualizations. We tested each set with ten practitioners over three recorded sessions, with follow-up interviews. Through thematic analysis, we derived insights on the effect of each set of cues for supporting movement learning. Finally, we generated design takeaways to inform future HCI work on visual cues for instructional training videos.
Understanding and Designing Avatar Biosignal Visualizations for Social Virtual Reality Entertainment
Sueyoon Lee (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands)Abdallah El Ali (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands)Maarten Wijntjes (Delft University of Technology, Delft, Netherlands)Pablo Cesar (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands)
Visualizing biosignals can be important for social Virtual Reality (VR), where avatar non-verbal cues are missing. While several biosignal representations exist, how to design effective visualizations, and how users perceive them within social VR entertainment, remains unclear. We adopt a mixed-methods approach to design biosignals for social VR entertainment. Using survey (N=54), context-mapping (N=6), and co-design (N=6) methods, we derive four visualizations. We then ran a within-subjects study (N=32) in a virtual jazz bar to investigate how heart rate (HR) and breathing rate (BR) visualizations, and signal rate, influence perceived avatar arousal, user distraction, and preferences. Findings show that skeuomorphic visualizations for both biosignals allow differentiable arousal inference; skeuomorphic and particle visualizations were least distracting for HR, whereas all were similarly distracting for BR; and biosignal perceptions often depend on avatar relations, entertainment type, and emotion inference of avatars versus spaces. We contribute HR and BR visualizations, and considerations for designing biosignal visualizations for social VR entertainment.
AvatAR: An Immersive Analysis Environment for Human Motion Data Combining Interactive 3D Avatars and Trajectories
Patrick Reipschläger (Autodesk Research, Toronto, Ontario, Canada)Frederik Brudy (Autodesk Research, Toronto, Ontario, Canada)Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)Justin Matejka (Autodesk Research, Toronto, Ontario, Canada)George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada)Fraser Anderson (Autodesk Research, Toronto, Ontario, Canada)
Analysis of human motion data can reveal valuable insights about the utilization of space and interaction of humans with their environment. To support this, we present AvatAR, an immersive analysis environment for the in-situ visualization of human motion data, that combines 3D trajectories, virtual avatars of people’s movement, and a detailed representation of their posture. Additionally, we describe how to embed visualizations directly into the environment, showing what a person looked at or what surfaces they touched, and how the avatar’s body parts can be used to access and manipulate those visualizations. AvatAR combines an AR HMD with a tablet to provide both mid-air and touch interaction for system control, as well as an additional overview to help users navigate the environment. We implemented a prototype and present several scenarios to show that AvatAR can enhance the analysis of human motion data by making data not only explorable, but experienceable.
Switching Between Standard Pointing Methods with Current and Emerging Computer Form Factors
Margaret Jean Foley (University of Waterloo, Waterloo, Ontario, Canada)Quentin Roy (University of Waterloo, Waterloo, Ontario, Canada)Da-Yuan Huang (Huawei Canada, Markham, Ontario, Canada)Wei Li (Huawei Canada, Markham, Ontario, Canada)Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
We investigate performance characteristics when switching between four pointing methods: absolute touch, absolute pen, relative mouse, and relative trackpad. The established "subtraction method" protocol used in mode-switching studies is extended to test pairs of methods and accommodate switch direction, multiple baselines, and controlling relative cursor position. A first experiment examines method switching on and around the horizontal surface of a tablet. Results find switching between pen and touch is fastest, and switching between relative and absolute methods incurs additional time penalty. A second experiment expands the investigation to an emerging foldable all-screen laptop form factor where switching also occurs on an angled surface and along a smoothly curved hinge. Results find switching between trackpad and touch is fastest, with all switching times generally higher. Our work contributes missing empirical evidence for switching performance using modern input methods, and our results can inform interaction design for current and emerging device form factors.
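The subtraction method the abstract extends estimates switching cost by comparing trial times that include a mode switch against single-method baseline times. A minimal sketch, with hypothetical timing values (the paper's actual data and protocol details are not reproduced here):

```python
def switch_cost(switch_trial_times, baseline_times):
    """Estimate mode-switch cost via the subtraction method:
    mean completion time for trials that include a method switch,
    minus mean time for baseline trials of the target method alone."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(switch_trial_times) - mean(baseline_times)

# Hypothetical pen-to-touch trials (s) vs. a touch-only baseline (s)
cost = switch_cost([1.9, 2.1, 2.0], [1.4, 1.6, 1.5])  # ~0.5 s penalty
```

Subtracting a per-method baseline isolates the switching overhead from the pointing time itself, which is why the protocol needs multiple baselines when testing pairs of methods in both directions.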
Evaluating Singing for Computer Input Using Pitch, Interval, and Melody
Graeme Zinck (University of Waterloo, Waterloo, Ontario, Canada)Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
In voice-based interfaces, non-verbal features represent a simple and underutilized design space for hands-free, language-agnostic interactions. We evaluate the performance of three fundamental types of voice-based musical interactions: pitch, interval, and melody. These interactions involve singing or humming a sequence of one or more notes. A 21-person study evaluates the feasibility and enjoyability of these interactions. The top performing participants were able to perform all interactions reasonably quickly (<5s) with average error rates between 1.3% and 8.6% after training. Others improved with training but still had error rates as high as 46% for pitch and melody interactions. The majority of participants found all tasks enjoyable. Using these results, we propose design considerations for using singing interactions as well as potential use cases for both standard computers and augmented reality glasses.
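The pitch and interval interactions above reduce to mapping a sung fundamental frequency onto a musical scale. A minimal sketch of that mapping (standard equal-temperament arithmetic; the helper names are illustrative, not from the paper's system):

```python
import math

def freq_to_midi(freq_hz):
    """Map a fundamental frequency to the nearest MIDI note number.
    A4 = 440 Hz = MIDI 69; semitone distance is 12 * log2(f / 440)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def interval_semitones(f1, f2):
    """Signed interval, in semitones, between two sung notes."""
    return freq_to_midi(f2) - freq_to_midi(f1)

# A4 (440 Hz) up to C5 (~523.25 Hz) is a minor third: +3 semitones
step = interval_semitones(440.0, 523.25)
```

Rounding to the nearest semitone gives interval-based commands some tolerance to imprecise singing, which matters given the error rates the study reports for untrained users.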
HeadWind: Enhancing Teleportation Experience in VR by Simulating Air Drag during Rapid Motion
Chun-Miao Tseng (National Taiwan University, Taipei, Taiwan)Po-Yu Chen (National Taiwan University, Taipei, Taiwan)Shih Chin Lin (National Taiwan University, Taipei, Taiwan)Yu-Wei Wang (National Taiwan University, Taipei, Taiwan)Yu-Hsin Lin (National Taiwan University, Taipei City, Taiwan)Mu-An Kuo (National Taiwan University, Taipei, Taiwan)Neng-Hao Yu (National Taiwan University of Science and Technology, Taipei, Taiwan)Mike Y. Chen (National Taiwan University, Taipei, Taiwan)
Teleportation, which instantly moves users from their current location to the target location, has become the most popular locomotion technique in VR games. It enables fast navigation with reduced VR sickness but results in significantly reduced immersion. We present HeadWind, a novel approach to improve the experience of teleportation by simulating the haptic sensation of air drag when rapidly moving through the air in real life. Specifically, HeadWind modulates bursts of compressed air to the face and uses multiple nozzles to provide directional cues. To design the wearable device and to model airflow speed and duration for teleportation, we conducted three formative studies and a design session. User experience evaluation with 24 participants showed that HeadWind significantly improved realism, immersion, and enjoyment of teleportation in VR (p<.01) with large effect sizes (r>0.5), and was preferred by 96% of participants.
Shape-Haptics: Planar & Passive Force Feedback Mechanisms for Physical Interfaces
Clement Zheng (National University of Singapore, Singapore, Singapore, Singapore)Zhen Zhou Yong (National University of Singapore, Singapore, Singapore, Singapore)Hongnan Lin (Georgia Institute of Technology, Atlanta, Georgia, United States)HyunJoo Oh (Georgia Institute of Technology, Atlanta, Georgia, United States)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore, Singapore)
We present Shape-Haptics, an approach for designers to rapidly design and fabricate passive force feedback mechanisms for physical interfaces. Such mechanisms are used in everyday interfaces and tools, and they are challenging to design. Shape-Haptics abstracts and broadens the haptic expression of this class of force feedback systems through 2D laser cut configurations that are simple to fabricate. They leverage the properties of polyoxymethylene plastic and comprise a compliant spring structure that engages with a sliding profile during tangible interaction. By shaping the sliding profile, designers can easily customize the haptic force feedback delivered by the mechanism. We provide a computational design sandbox to help designers explore and fabricate Shape-Haptics mechanisms. We also propose a series of applications that demonstrate the utility of Shape-Haptics in creating and customizing haptics for different physical interfaces.
Design Guidelines for Prompt Engineering Text-to-Image Generative Models
Vivian Liu (Columbia University, New York, New York, United States)Lydia B. Chilton (Columbia University, New York, New York, United States)
Text-to-image generative models are a new and powerful way to generate visual artwork. However, the open-ended nature of text as interaction is double-edged; while users can input anything and have access to an infinite range of generations, they also must engage in brute-force trial and error with the text prompt when the result quality is poor. We conduct a study exploring what prompt keywords and model hyperparameters can help produce coherent outputs. In particular, we study prompts structured to include subject and style keywords and investigate success and failure modes of these prompts. Our evaluation of 5493 generations over the course of five experiments spans 51 abstract and concrete subjects as well as 51 abstract and figurative styles. From this evaluation, we present design guidelines that can help people produce better outcomes from text-to-image generative models.
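The subject-plus-style prompt structure the study examines can be sketched as a simple template. A minimal illustration, assuming hypothetical keyword choices rather than the paper's validated vocabulary:

```python
def build_prompt(subject, style, modifiers=()):
    """Compose a text-to-image prompt from a subject keyword,
    a style keyword, and optional extra modifiers, in the
    subject-plus-style structure studied in the paper."""
    parts = [subject, f"in the style of {style}", *modifiers]
    return ", ".join(parts)

# Hypothetical example: a concrete subject paired with a figurative style
prompt = build_prompt("a lighthouse at dusk", "impressionism", ("oil painting",))
```

Structuring prompts this way makes it easy to vary subject and style independently, which is how the study's five experiments crossed 51 subjects with 51 styles.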