List of Notable Papers

Showing up to the top 30 papers in each category.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

25
Personal Dream Informatics: A Self-Information Systems Model of Dream Engagement
Michael Jeffrey Daniel Hoefer (University of Colorado Boulder, Boulder, Colorado, United States), Bryce E. Schumacher (University of Colorado Boulder, Boulder, Colorado, United States), Stephen Voida (University of Colorado Boulder, Boulder, Colorado, United States)
We present the research area of personal dream informatics: studying the self-information systems that support dream engagement and communication between the dreaming self and the wakeful self. Through a survey study of 281 individuals primarily recruited from an online community dedicated to dreaming, we develop a dream-information systems view of dreaming and dream tracking as a type of self-information system. While dream-information systems are characterized by diverse tracking processes, motivations, and outcomes, they are universally constrained by the ephemeral dreamset - the short period of time between waking up and rapid memory loss of dream experiences. By developing a system dynamics model of dreaming we highlight feedback loops that serve as high leverage points for technology designers, and suggest a variety of design considerations for crafting technology that best supports dream recall, dream tracking, and dreamwork for nightmare relief and personal development.
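To illustrate what a system dynamics model with a reinforcing feedback loop looks like in this context, here is a toy simulation in which recall practice increases engagement, which in turn increases recall. The variables, coefficients, and update rule are invented for illustration and are not the paper's model.

```python
# Toy reinforcing loop between dream recall and dream engagement.
# All variables and coefficients are invented for illustration; the
# paper's actual system dynamics model is richer than this sketch.
recall, engagement = 0.10, 0.10   # hypothetical starting levels in [0, 1]
for week in range(12):
    recall += 0.30 * engagement * (1 - recall) - 0.05 * recall
    engagement += 0.30 * recall * (1 - engagement) - 0.05 * engagement
    print(f"week {week:2d}: recall={recall:.2f} engagement={engagement:.2f}")
```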
22
Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces
Ryo Suzuki (University of Calgary, Calgary, Alberta, Canada), Adnan Karim (University of Calgary, Calgary, Alberta, Canada), Tian Xia (University of Calgary, Calgary, Alberta, Canada), Hooman Hedayati (University of Colorado Boulder, Boulder, Colorado, United States), Nicolai Marquardt (University College London, London, United Kingdom)
This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
22
A Layered Authoring Tool for Stylized 3D Animations
Jiaju Ma (Brown University, Providence, Rhode Island, United States), Li-Yi Wei (Adobe Research, San Jose, California, United States), Rubaiat Habib Kazi (Adobe Research, Seattle, Washington, United States)
Guided by the 12 principles of animation, stylization is a core 2D animation feature but has been utilized mainly by experienced animators. Although there are tools for stylizing 2D animations, creating stylized 3D animations remains a challenging problem due to the additional spatial dimension and the need for responsive actions like contact and collision. We propose a system that helps users create stylized casual 3D animations. A layered authoring interface is employed to balance ease of use and expressiveness. Our surface-level UI is a timeline sequencer that lets users add preset stylization effects such as squash and stretch and follow-through to plain motions. Users can adjust spatial and temporal parameters to fine-tune these stylizations. These edits are propagated to our node-graph-based second-level UI, in which users can create custom stylizations once they are comfortable with the surface-level UI. Our system also enables the stylization of interactions among multiple objects like force, energy, and collision. A pilot user study showed that our fluid layered UI design balances ease of use and expressiveness better than existing tools.
19
"Chat Has No Chill": A Novel Physiological Interaction for Engaging Live Streaming Audiences
Raquel Breejon Robinson (University of Saskatchewan, Saskatoon, Saskatchewan, Canada), Ricardo Rheeder (University of Saskatchewan, Saskatoon, Saskatchewan, Canada), Madison Klarkowski (University of Saskatchewan, Saskatoon, Saskatchewan, Canada), Regan L. Mandryk (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)
Now more than ever, people are using online platforms to communicate. Twitch, the foremost platform for live game streaming, offers many communication modalities. However, the platform lacks representation of social cues and signals of the audience experience, which are innately present in live events. To address this, we present a technology probe that captures audience energy and response in a game streaming context. We designed a game and integrated a custom communication modality, Commons Sense, in which audience members' heart rates are sensed via webcam, averaged, and fed into a video game to affect sound, lighting, and difficulty. We conducted an 'in-the-wild' evaluation with four Twitch streamers and their audience members (N=55) to understand how these groups interacted through Commons Sense. Audience members and streamers indicated high levels of enjoyment and engagement with Commons Sense, suggesting the potential of physiological interaction as a beneficial communication tool in live streaming.
19
Eyes-Off Your Fingers: Gradual Surface Haptic Feedback Improves Eyes-Free Touchscreen Interaction
Corentin Bernard (Aix Marseille Univ, CNRS, PRISM, Marseille, France), Jocelyn Monnoyer (CNRS / Aix-Marseille University, Marseille, France), Sølvi Ystad (Aix Marseille Univ, CNRS, PRISM, Marseille, France), Michael Wiertlewski (TU Delft, Delft, Netherlands)
Moving a slider to set the music volume or control the air conditioning is a familiar task that requires little attention. However, adjusting a virtual slider on a featureless touchscreen is much more demanding and can be dangerous in situations such as driving. Here, we study how gradual tactile feedback, provided by a haptic touchscreen, can replace visual cues. As users adjust a setting with their finger, they feel a continuously changing texture whose spatial frequency correlates with the value of the setting. We demonstrate that, after training with visual and auditory feedback, users are able to adjust a setting on a haptic touchscreen without looking at the screen, thereby reducing visual distraction. Every learning strategy yielded similar performance, suggesting an amodal integration. This study shows that surface haptics can provide intuitive and precise tuning possibilities for tangible interfaces on touchscreens.
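As a rough illustration of the mapping described above (a sketch, not the authors' implementation), the following maps a slider value to the spatial frequency of a sinusoidal friction texture; the frequency range is an assumption.

```python
import math

# Hypothetical constants (not from the paper): the spatial frequency of
# the friction texture grows linearly with the slider value.
MIN_FREQ = 0.2   # cycles per mm at slider value 0.0
MAX_FREQ = 2.0   # cycles per mm at slider value 1.0

def texture_amplitude(slider_value: float, finger_pos_mm: float) -> float:
    """Friction modulation felt at finger_pos_mm for a given slider value.

    The texture is a sinusoidal friction grating whose spatial frequency
    encodes the current setting, so the value can be felt eyes-free.
    """
    freq = MIN_FREQ + slider_value * (MAX_FREQ - MIN_FREQ)
    return math.sin(2 * math.pi * freq * finger_pos_mm)

# The same finger position feels a denser texture at a higher setting.
print(texture_amplitude(0.1, 5.0), texture_amplitude(0.9, 5.0))
```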
15
immersivePOV: Filming How-To Videos with a Head-Mounted 360° Action Camera
Kevin Huang (University of Toronto, Toronto, Ontario, Canada), Jiannan Li (University of Toronto, Toronto, Ontario, Canada), Mauricio Sousa (University of Toronto, Toronto, Ontario, Canada), Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
How-to videos are often shot using camera angles that may not be optimal for learning motor tasks, with a prevalent use of third-person perspective. We present immersivePOV, an approach to film how-to videos from an immersive first-person perspective using a head-mounted 360° action camera. immersivePOV how-to videos can be viewed in a Virtual Reality headset, giving the viewer an eye-level viewpoint with three Degrees of Freedom. We evaluated our approach with two everyday motor tasks against a baseline first-person perspective and a third-person perspective. In a between-subjects study, participants were assigned to watch the task videos and then replicate the tasks. Results suggest that immersivePOV reduced perceived cognitive load and facilitated task learning. We discuss how immersivePOV can also streamline the video production process for content creators. Altogether, we conclude that immersivePOV is an effective approach to film how-to videos for learners and content creators alike.
15
Shaping Textile Sliders: An Evaluation of Form Factors and Tick Marks for Textile Sliders
Oliver Nowak (RWTH Aachen University, Aachen, Germany), René Schäfer (RWTH Aachen University, Aachen, Germany), Anke Brocker (RWTH Aachen University, Aachen, Germany), Philipp Wacker (RWTH Aachen University, Aachen, Germany), Jan Borchers (RWTH Aachen University, Aachen, Germany)
Textile interfaces enable designers to integrate unobtrusive media and smart home controls into furniture such as sofas. While the technical aspects of such controllers have been the subject of numerous research projects, the physical form factor of these controls has received little attention so far. This work investigates how general design properties, such as overall slider shape, raised vs. recessed sliders, and number and layout of tick marks, affect users' preferences and performance. Our first user study identified a preference for certain design combinations, such as recessed, closed-shaped sliders. Our second user study included performance measurements on variations of the preferred designs from study 1, and took a closer look at tick marks. Tick marks supported orientation better than slider shape. Sliders with at least three tick marks were preferred, and performed well. Non-uniform, equally distributed tick marks reduced the movements users needed to orient themselves on the slider.
14
Exploring Spatial UI Transition Mechanisms with Head-Worn Augmented Reality
Feiyu Lu (Virginia Tech, Blacksburg, Virginia, United States), Yan Xu (Facebook, Redmond, Washington, United States)
Imagine a future in which people comfortably wear augmented reality (AR) displays all day: how do we design interfaces that adapt to the contextual changes as people move around? In current operating systems, the majority of AR content defaults to staying at a fixed location until being manually moved by the users. However, this approach puts the burden of user interface (UI) transition solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. Then we addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, fully-automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results provide valuable lessons about the trade-offs between UI automation levels, controllability, user agency, and the impact of prediction errors.
14
Janus Screen: A Screen with Switchable Projection Surfaces Using Wire Grid Polarizer
Wataru Yamada (NTT DOCOMO, INC., Tokyo, Japan), Sawa Korogi (NTT DOCOMO, INC., Tokyo, Japan), Keiichi Ochiai (NTT DOCOMO, INC., Tokyo, Japan)
In this paper, we present a novel screen system employing polarizers that allows switching the projection surface to the front, rear, or both sides using only two projectors on one side. We propose a method that employs two projectors equipped with polarizers and a multi-layered screen comprising an anti-reflective plate, a transparent screen, and a wire grid polarizer. The multi-layered screen changes whether the projected image is shown on the front or rear side of the screen depending on the polarization direction of the incident light. Hence, the proposed method can project images on the front, rear, or both sides of the screen by projecting images from either or both projectors using polarizers. In addition, the proposed method can be easily deployed by simply attaching multiple optical films. We implement a prototype and confirm that the proposed method can selectively switch the projection surface.
14
Towards Understanding Diminished Reality
Yi Fei Cheng (Swarthmore College, Swarthmore, Pennsylvania, United States), Hang Yin (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Yukang Yan (Tsinghua University, Beijing, China), Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France), David Lindlbauer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Diminished reality (DR) refers to the concept of removing content from a user's visual environment. While its implementation is becoming feasible, it is still unclear how users perceive and interact in DR-enabled environments and what applications it benefits. To address this challenge, we first conduct a formative study to compare user perceptions of DR and mediated reality effects (e.g., changing the color or size of target elements) in four example scenarios. Participants preferred removing objects through opacity reduction (i.e., the standard DR implementation) and appreciated mechanisms for maintaining a contextual understanding of diminished items (e.g., outlining). In a second study, we explore the user experience of performing tasks within DR-enabled environments. Participants selected which objects to diminish and the magnitude of the effects when performing two separate tasks (video viewing, assembly). Participants were comfortable with decreased contextual understanding, particularly for less mobile tasks. Based on the results, we define guidelines for creating general DR-enabled environments.
14
Virtual Transcendent Dream: Empowering People through Embodied Flying in VR
Pinyao Liu (Simon Fraser University, Surrey, British Columbia, Canada), Ekaterina R. Stepanova (Simon Fraser University, Surrey, British Columbia, Canada), Alexandra Kitson (Simon Fraser University, Surrey, British Columbia, Canada), Thecla Schiphorst (Simon Fraser University, Vancouver, British Columbia, Canada), Bernhard E. Riecke (Simon Fraser University, Vancouver, British Columbia, Canada)
Flying dreams have the potential to evoke a feeling of empowerment (or self-efficacy, confidence in our ability to succeed) and self-transcendent experience (STE), which have been shown to contribute to an individual’s overall well-being. However, these exceptional dreaming experiences remain difficult to induce at will. Inspired by the potential of Virtual Reality (VR) to support profound emotional experiences, we explored if a VR flying interface with more embodied self-motion cues could contribute to the benefits associated with flying dreams (i.e., STE and empowerment). Our results indicated that a flying interface with more self-motion cues indeed better supported STE and empowerment. We derived several design considerations: obscurity, extraordinary light and supportive setting. Our results contribute to the discourse around design guidelines for self-transcendence and empowerment in VR, which may further be applied to the improvement of mental well-being.
14
Prevalence and Salience of Problematic Microtransactions in Top-Grossing Mobile and PC Games: A Content Analysis of User Reviews
Elena Petrovskaya (University of York, York, United Kingdom), Sebastian Deterding (University of York, York, United Kingdom), David I. Zendle (University of York, York, North Yorkshire, United Kingdom)
Microtransactions have become a major monetisation model in digital games, shaping their design, impacting their player experience, and raising ethical concerns. Research in this area has chiefly focused on loot boxes. This raises the question of whether other microtransactions might actually be more relevant and problematic for players. We therefore conducted a content analysis of negative player reviews (n=801) of top-grossing mobile and desktop games to determine which problematic microtransactions are most prevalent and salient for players. We found that problematic microtransactions are widespread, with mobile games featuring more frequent and more varied techniques than desktop games. Across both, players were bothered by issues related to fairness, transparency, and degraded user experience, supporting prior theoretical work, and importantly took issue with monetisation-driven design as such. We identify future research needs on why microtransactions in particular spark this critique, and which player communities it may be more or less representative of.
14
FlatMagic: Improving Flat Colorization through AI-driven Design for Digital Comic Professionals
Chuan Yan (George Mason University, Fairfax, Virginia, United States), John Joon Young Chung (University of Michigan, Ann Arbor, Michigan, United States), Yoon Kiheon (Pusan National University, Pusan, Korea, Republic of), Yotam Gingold (George Mason University, Fairfax, Virginia, United States), Eytan Adar (University of Michigan, Ann Arbor, Michigan, United States), Sungsoo Ray Hong (George Mason University, Fairfax, Virginia, United States)
Creating digital comics involves multiple stages, some creative and some menial. For example, coloring a comic requires a labor-intensive stage known as 'flatting,' or masking segments of continuous color, as well as creative shading, lighting, and stylization stages. The use of AI can automate the colorization process, but early efforts have revealed limitations, both technical and UX-related, to full automation. Via a formative study of professionals, we identify flatting as a bottleneck and a key target of opportunity for human-guided, AI-driven automation. Based on this insight, we built FlatMagic, an interactive, AI-driven flat colorization support tool for Photoshop. Our user studies found that using FlatMagic significantly reduced professionals' real and perceived effort versus their current practice. While participants effectively used FlatMagic, we also identified potential constraints in interactions with AI and partially automated workflows. We reflect on implications for comic-focused tools and the benefits and pitfalls of intermediate representations and partial automation in designing human-AI collaboration tools for professionals.
13
A Model Predictive Control Approach for Reach Redirection in Virtual Reality
Eric J. Gonzalez (Stanford University, Stanford, California, United States), Elyse D. Z. Chase (Stanford University, Stanford, California, United States), Pramod Kotipalli (Stanford University, Stanford, California, United States), Sean Follmer (Stanford University, Stanford, California, United States)
Reach redirection is an illusion-based virtual reality (VR) interaction technique where a user’s virtual hand is shifted during a reach in order to guide their real hand to a physical location. Prior works have not considered the underlying sensorimotor processes driving redirection. In this work, we propose adapting a sensorimotor model for goal-directed reach to obtain a model for visually-redirected reach, specifically by incorporating redirection as a sensory bias in the state estimate used by a minimum jerk motion controller. We validate and then leverage this model to develop a Model Predictive Control (MPC) approach for reach redirection, enabling the real-time generation of spatial warping according to desired optimization criteria (e.g., redirection goals) and constraints (e.g., sensory thresholds). We illustrate this approach with two example criteria -- redirection to a desired point and redirection along a desired path -- and compare our approach against existing techniques in a user evaluation.
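To make the control idea concrete, here is a minimal sketch (not the authors' implementation) of a minimum-jerk reach whose goal is shifted by a visual redirection bias; all numbers are illustrative assumptions.

```python
import numpy as np

def minimum_jerk(x0: np.ndarray, xf: np.ndarray, steps: int = 100) -> np.ndarray:
    """Classic minimum-jerk position profile from x0 to xf, one row per step."""
    tau = np.linspace(0.0, 1.0, steps)              # normalized time
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5      # minimum-jerk blend
    return x0 + np.outer(s, xf - x0)

# Illustrative numbers, not from the paper: the virtual target sits 30 cm
# away, and redirection biases the visual estimate of the hand by -3 cm,
# so the minimum-jerk controller steers the real hand to 33 cm.
real_start = np.array([0.0, 0.0, 0.00])
virtual_target = np.array([0.0, 0.0, 0.30])
visual_bias = np.array([0.0, 0.0, -0.03])   # warp applied to the seen hand
biased_goal = virtual_target - visual_bias  # where the real hand ends up

trajectory = minimum_jerk(real_start, biased_goal)
print(trajectory[-1])  # -> [0.   0.   0.33]
```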
13
DreamStream: Immersive and Interactive Spectating in VR
Balasaravanan Thoravi Kumaravel (UC Berkeley, Berkeley, California, United States), Andrew D. Wilson (Microsoft Research, Redmond, Washington, United States)
Today, spectating and streaming virtual reality (VR) activities typically involve spectators viewing a 2D stream of the VR user’s view. Streaming 2D videos of the gameplay is popular and well-supported by platforms such as Twitch. However, the generic streaming of full 3D representations is less explored. Thus, while the VR player’s experience may be fully immersive, spectators are limited to 2D videos. This asymmetry lessens the overall experience for spectators, who themselves may be eager to spectate in VR. DreamStream puts viewers in the virtual environment of the VR application, allowing them to look “over the shoulder” of the VR player. Spectators can view streamed VR content immersively in 3D, independently explore the VR scene beyond what the VR player sees, and ultimately cohabit the virtual environment alongside the VR player. For the VR player, DreamStream provides a spatial awareness of all their spectators. DreamStream retrofits and works with existing VR applications. We discuss the design and implementation of DreamStream and carry out three informal qualitative evaluations. These evaluations shed light on the strengths and weaknesses of using DreamStream for the purpose of interactive spectating. Our participants found that DreamStream’s VR viewer interface offered increased immersion and made it easier to communicate and interact with the VR player.
13
Logic Bonbon: Exploring Food as Computational Artifact
Jialin Deng (Monash University, Melbourne, Victoria, Australia), Patrick Olivier (Monash University, Melbourne, Victoria, Australia), Josh Andres (The Australian National University, Canberra, Australian Capital Territory, Australia), Kirsten Ellis (Monash University, Melbourne, Victoria, Australia), Ryan Wee (Monash University, Melbourne, Victoria, Australia), Florian ‘Floyd’ Mueller (Monash University, Melbourne, Victoria, Australia)
In recognition of food’s significant experiential pleasures, culinary practitioners and designers are increasingly exploring novel combinations of computing technologies and food. However, despite many creative endeavors, proposals and prototypes have so far largely maintained a traditional divide, treating food and technology as separate entities. In contrast, we present a “Research through Design” exploration of the notion of food as a computational artifact, wherein food itself is the material of computation. We describe the Logic Bonbon, a dessert that can hydrodynamically regulate its flavor via a fluidic logic system. Through a study of experiencing the Logic Bonbon and reflection on our design practice, we offer a provisional account of how food as a computational artifact can mediate new interactions through a novel approach to food-computation integration, promoting an enriched future of Human-Food Interaction.
13
Hand Interfaces: Using Hands to Imitate Objects in AR/VR for Expressive Interactions
Siyou Pei (University of California, Los Angeles, Los Angeles, California, United States), Alexander Chen (University of California, Los Angeles, Los Angeles, California, United States), Jaewook Lee (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States), Yang Zhang (University of California, Los Angeles, Los Angeles, California, United States)
Augmented reality (AR) and virtual reality (VR) technologies create exciting new opportunities for people to interact with computing resources and information. Less exciting is the need for holding hand controllers, which limits applications that demand expressive, readily available interactions. Prior research investigated freehand AR/VR input by transforming the user's body into an interaction medium. In contrast to previous work that has users' hands grasp virtual objects, we propose a new interaction technique that lets users' hands become virtual objects by imitating the objects themselves. For example, a thumbs-up hand pose is used to mimic a joystick. We created a wide array of interaction designs around this idea to demonstrate its applicability in object retrieval and interactive control tasks. Collectively, we call these interaction designs Hand Interfaces. From a series of user studies comparing Hand Interfaces against various baseline techniques, we collected quantitative and qualitative feedback, which indicates that Hand Interfaces are effective, expressive, and fun to use.
12
Supercharging Trial-and-Error for Learning Complex Software Applications
Damien Masson (Autodesk Research, Toronto, Ontario, Canada), Jo Vermeulen (Autodesk Research, Toronto, Ontario, Canada), George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada), Justin Matejka (Autodesk Research, Toronto, Ontario, Canada)
Despite an abundance of carefully-crafted tutorials, trial-and-error remains many people’s preferred way to learn complex software. Yet, approaches to facilitate trial-and-error (such as tooltips) have evolved very little since the 1980s. While existing mechanisms work well for simple software, they scale poorly to large feature-rich applications. In this paper, we explore new techniques to support trial-and-error in complex applications. We identify key benefits and challenges of trial-and-error, and introduce a framework with a conceptual model and design space. Using this framework, we developed three techniques: ToolTrack to keep track of trial-and-error progress; ToolTrip to go beyond trial-and-error of single commands by highlighting related commands that are frequently used together; and ToolTaste to quickly and safely try commands. We demonstrate how these techniques facilitate trial-and-error, as illustrated through a proof-of-concept implementation in the CAD software Fusion 360. We conclude by discussing possible scenarios and outline directions for future research on trial-and-error.
12
Understanding and Designing Avatar Biosignal Visualizations for Social Virtual Reality Entertainment
Sueyoon Lee (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands), Abdallah El Ali (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands), Maarten Wijntjes (Delft University of Technology, Delft, Netherlands), Pablo Cesar (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands)
Visualizing biosignals can be important for social Virtual Reality (VR), where avatar non-verbal cues are missing. While several biosignal representations exist, designing effective visualizations and understanding user perceptions within social VR entertainment remains unclear. We adopt a mixed-methods approach to design biosignals for social VR entertainment. Using survey (N=54), context-mapping (N=6), and co-design (N=6) methods, we derive four visualizations. We then ran a within-subjects study (N=32) in a virtual jazz-bar to investigate how heart rate (HR) and breathing rate (BR) visualizations, and signal rate, influence perceived avatar arousal, user distraction, and preferences. Findings show that skeuomorphic visualizations for both biosignals allow differentiable arousal inference; skeuomorphic and particles were least distracting for HR, whereas all were similarly distracting for BR; biosignal perceptions often depend on avatar relations, entertainment type, and emotion inference of avatars versus spaces. We contribute HR and BR visualizations, and considerations for designing social VR entertainment biosignal visualizations.
12
FingerX: Rendering Haptic Shape of Virtual Objects Augmented from Real Objects using Extendable and Withdrawable Supports on Fingers
Hsin-Ruey Tsai (National Chengchi University, Taipei, Taiwan), Chieh Tsai (National Chengchi University, Taipei, Taiwan), Yu-So Liao (National Chengchi University, Taipei, Taiwan), Yi-Ting Chiang (National Chengchi University, Taipei, Taiwan), Zhong-Yi Zhang (National Chengchi University, Taipei City, Taiwan)
Interacting with not only virtual but also real objects, or even virtual objects augmented by real objects, is becoming a trend in virtual reality (VR) interaction and is common in augmented reality (AR). However, current haptic shape rendering devices generally focus on feedback for virtual objects and require users to put down or take off those devices to perceive real objects. Therefore, we propose FingerX to render haptic shapes and enable users to touch, grasp and interact with virtual and real objects simultaneously. An extender on the fingertip extends to a corresponding height to provide support between the fingertip and the real object or the hand, rendering virtual shapes. A ring rotates and withdraws the extender behind the fingertip when touching real objects. By independently controlling four extenders and rings on each finger, with the exception of the pinky finger, FingerX renders feedback in three common scenarios, including touching virtual objects augmented by real environments (e.g., a desk), grasping virtual objects augmented by real objects (e.g., a bottle) and grasping virtual objects in the hand. We conducted a shape recognition study to evaluate the recognition rates for these three scenarios and obtained an average recognition rate of 76.59% with shape visual feedback. We then performed a VR study to observe how users interact with virtual and real objects simultaneously and verified that FingerX significantly enhances VR realism, compared to current vibrotactile methods.
12
Barriers to Expertise in Citizen Science Games
Josh Aaron Miller (Northeastern University, Boston, Massachusetts, United States), Seth Cooper (Northeastern University, Boston, Massachusetts, United States)
Expertise-centric citizen science games (ECCSGs) can be powerful tools for crowdsourcing scientific knowledge production. However, to be effective, these games must train their players on how to become experts, which is difficult in practice. In this study, we investigated the path to expertise and the barriers involved by interviewing players of three ECCSGs: Foldit, Eterna, and Eyewire. We then applied reflexive thematic analysis to generate themes of their experiences and produce a model of expertise and its barriers. We found expertise is constructed through a cycle of exploratory and social learning but prevented by instructional design issues. Moreover, exploration is slowed by a lack of polish in the game artifact, and social learning is disrupted by a lack of clear communication. Based on our analysis, we make several recommendations for CSG developers, including: collaborating with professionals of required skill sets; providing social features and feedback systems; and improving scientific communication.
12
"I Didn't Know I Looked Angry": Characterizing Observed Emotion and Reported Affect at Work
Harmanpreet Kaur (University of Michigan, Ann Arbor, Michigan, United States), Daniel McDuff (Microsoft, Seattle, Washington, United States), Alex C. Williams (University of Tennessee, Knoxville, Knoxville, Tennessee, United States), Jaime Teevan (Microsoft, Redmond, Washington, United States), Shamsi Iqbal (Microsoft Research, Redmond, Washington, United States)
With the growing prevalence of affective computing applications, Automatic Emotion Recognition (AER) technologies have garnered attention in both research and industry settings. Initially limited to speech-based applications, AER technologies now include analysis of facial landmarks to provide predicted probabilities of a common subset of emotions (e.g., anger, happiness) for faces observed in an image or video frame. In this paper, we study the relationship between AER outputs and self-reports of affect employed by prior work, in the context of information work at a technology company. We compare the continuous observed emotion output from an AER tool to discrete reported affect obtained via a one-day combined tool-use and diary study (N=15). We provide empirical evidence showing that these signals do not completely align, and find that using additional workplace context only improves alignment up to 58.6%. These results suggest affect must be studied in the context it is being expressed, and observed emotion signal should not replace internal reported affect for affective computing applications.
12
O&O: A DIY toolkit for designing and rapid prototyping olfactory interfaces
Yuxuan Lei (Tsinghua University, Beijing, China), Qi Lu (Tsinghua University, Beijing, China), Yingqing Xu (Tsinghua University, Beijing, China)
Constructing olfactory interfaces on demand requires significant design proficiency and engineering effort. The absence of powerful and convenient tools that reduce innovation complexity poses obstacles for future research in the area. To address this problem, we propose O&O, a modular olfactory interface DIY toolkit. The toolkit consists of: (1) a scent generation kit, a set of electronics and accessories that support three common scent vaporization techniques; (2) a module construction kit, a set of primitive cardboard modules for assembling permutable functional structures; and (3) a design manual, a step-by-step design thinking framework that directs the decision-making and prototyping process. We organized a formal workshop with 19 participants and four solo DIY trials to evaluate the capability of the toolkit, the overall user engagement, the creations in both sessions, and the iterative suggestions. Finally, we discuss design implications and future opportunities for further research.
12
ReCompFig: Designing Dynamically Reconfigurable Kinematic Devices Using Compliant Mechanisms and Tensioning Cables
Humphrey Yang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Tate Johnson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Ke Zhong (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Dinesh K. Patel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Gina Olson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Carmel Majidi (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Mohammad Islam (Materials Science and Engineering, Pittsburgh, Pennsylvania, United States), Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
From creating input devices to rendering tangible information, the field of HCI is interested in using kinematic mechanisms to create human-computer interfaces. Yet, due to fabrication and design challenges, it is often difficult to create kinematic devices that are compact and have multiple reconfigurable motional degrees of freedom (DOFs) depending on the interaction scenario. In this work, we combine compliant mechanisms (CMs) with tensioning cables to create dynamically reconfigurable kinematic mechanisms. The device’s kinematics (DOFs) are enabled and determined by the layout of bendable rods. The additional cables function as on-demand motion constraints that can dynamically lock or unlock the mechanism’s DOFs as they are tightened or loosened. We provide algorithms and a design tool prototype to help users design such kinematic devices. We also demonstrate various HCI use cases including a kinematic haptic display, a haptic proxy, and a multimodal input device.
12
Does Dynamically Drawn Text Improve Learning? Investigating the Effect of Text Presentation Styles in Video Learning
Ashwin Ram (National University of Singapore, Singapore, Singapore), Shengdong Zhao (National University of Singapore, Singapore, Singapore)
Dynamically drawn content (e.g., handwritten text) in learning videos is believed to improve users’ engagement and learning over static PowerPoint-based videos. However, evidence from the existing literature is inconclusive. With the emergence of Optical Head-Mounted Displays (OHMDs), recent work has shown that video learning can be adapted for on-the-go scenarios. To better understand the role of dynamic drawing, we decoupled dynamically drawn text into two factors (font style and motion of appearance) and studied their impact on learning performance under two usage scenarios (seated with a desktop and walking with an OHMD). We found that although letter-traced text was more engaging for some users, most preferred learning with typeface text that displayed the entire word at once and achieved better recall (46.7% higher), regardless of the usage scenario. Insights learned from the studies can better inform designers on how to present text in videos for ubiquitous access.
12
Adaptive Empathy Learning Support in Peer Review Scenarios
Thiemo Wambsganss (University of St. Gallen, Sankt Gallen, Switzerland), Matthias Soellner (University of Kassel, Kassel, Germany), Kenneth R. Koedinger (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Jan Marco Leimeister (University of St. Gallen, St. Gallen, Switzerland)
Advances in Natural Language Processing offer techniques to detect the empathy level in texts. To test whether individual feedback on students’ empathy levels during the peer review writing process helps them write more empathic reviews, we developed ELEA, an adaptive writing support system that provides students with feedback on cognitive and emotional empathy structures. We compared ELEA to a proven empathy support tool in a peer review setting with 119 students. We found that students using ELEA wrote more empathic peer reviews with a higher level of emotional empathy compared to the control group. The high perceived skill learning, the technology acceptance, and the level of enjoyment provide promising results for using such an approach as a feedback application in traditional learning settings. Our results indicate that learning applications based on NLP are able to foster empathic writing skills of students in peer review scenarios.
12
InfraredTags: Embedding Invisible AR Markers and Barcodes Using Low-Cost, Infrared-Based 3D Printing and Imaging Tools
Mustafa Doga Dogan (MIT CSAIL, Cambridge, Massachusetts, United States), Ahmad Taka (MIT CSAIL, Cambridge, Massachusetts, United States), Michael Lu (MIT CSAIL, Cambridge, Massachusetts, United States), Yunyi Zhu (MIT CSAIL, Cambridge, Massachusetts, United States), Akshat Kumar (MIT CSAIL, Cambridge, Massachusetts, United States), Aakar Gupta (Facebook Inc, Redmond, Washington, United States), Stefanie Mueller (MIT CSAIL, Cambridge, Massachusetts, United States)
Existing approaches for embedding unobtrusive tags inside 3D objects require either complex fabrication or high-cost imaging equipment. We present InfraredTags, which are 2D markers and barcodes imperceptible to the naked eye that can be 3D printed as part of objects, and detected rapidly by low-cost near-infrared cameras. We achieve this by printing objects from an infrared-transmitting filament, which infrared cameras can see through, and by having air gaps inside for the tag's bits, which appear at a different intensity in the infrared image. We built a user interface that facilitates the integration of common tags (QR codes, ArUco markers) with the object geometry to make them 3D printable as InfraredTags. We also developed a low-cost infrared imaging module that augments existing mobile devices and decodes tags using our image processing pipeline. Our evaluation shows that the tags can be detected with little near-infrared illumination (0.2 lux) and from distances as far as 250 cm. We demonstrate how our method enables various applications, such as object tracking and embedding metadata for augmented reality and tangible interactions.
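The decoding side can be pictured with a generic near-infrared pipeline like the sketch below, written with OpenCV; this is an assumption-laden sketch, not the paper's actual pipeline: contrast-stretch the faint NIR image so the air-gap bits stand out, then decode the embedded code.

```python
import cv2

# Open the (assumed) near-infrared camera, boost the faint contrast
# between the IR-transmitting shell and the air-gap bits, then decode.
cap = cv2.VideoCapture(0)   # camera index of the NIR module is an assumption
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                 # stretch the low contrast
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(binary)
    if data:
        print("decoded tag:", data)
cap.release()
```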
11
Switching Between Standard Pointing Methods with Current and Emerging Computer Form Factors
Margaret Jean Foley (University of Waterloo, Waterloo, Ontario, Canada), Quentin Roy (University of Waterloo, Waterloo, Ontario, Canada), Da-Yuan Huang (Huawei Canada, Markham, Ontario, Canada), Wei Li (Huawei Canada, Markham, Ontario, Canada), Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
We investigate performance characteristics when switching between four pointing methods: absolute touch, absolute pen, relative mouse, and relative trackpad. The established "subtraction method" protocol used in mode-switching studies is extended to test pairs of methods and accommodate switch direction, multiple baselines, and controlling relative cursor position. A first experiment examines method switching on and around the horizontal surface of a tablet. Results find switching between pen and touch is fastest, and switching between relative and absolute methods incurs additional time penalty. A second experiment expands the investigation to an emerging foldable all-screen laptop form factor where switching also occurs on an angled surface and along a smoothly curved hinge. Results find switching between trackpad and touch is fastest, with all switching times generally higher. Our work contributes missing empirical evidence for switching performance using modern input methods, and our results can inform interaction design for current and emerging device form factors.
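The "subtraction method" mentioned above can be illustrated with a minimal sketch: subtract a no-switch baseline from the time of a trial that requires a switch. This is a simplification of the study protocol, and the timing numbers are invented.

```python
# Switch cost = time of a trial that requires a method switch, minus a
# no-switch baseline for the destination method. Numbers are invented.
baseline_ms = {"pen": 820.0, "touch": 790.0}   # repeated-method trials
switch_trial_ms = {("pen", "touch"): 1150.0}   # pen -> touch trials

def switch_cost(src: str, dst: str) -> float:
    """Extra time attributable to switching from src to dst."""
    return switch_trial_ms[(src, dst)] - baseline_ms[dst]

print(switch_cost("pen", "touch"))  # -> 360.0 (ms)
```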
11
Interpolating Happiness: Understanding the Intensity Gradations of Face Emojis Across Cultures
Andrey Krekhov (University of Duisburg-Essen, Duisburg, NRW, Germany), Katharina Emmerich (University of Duisburg-Essen, Duisburg, NRW, Germany), Johannes Fuchs (University of Konstanz, Konstanz, Germany), Jens Harald Krueger (University of Duisburg-Essen, Duisburg, NRW, Germany)
We frequently utilize face emojis to express emotions in digital communication. But how wholly and precisely do such pictographs sample the emotional spectrum, and are there gaps to be closed? Our research establishes emoji intensity scales for seven basic emotions: happiness, anger, disgust, sadness, shock, annoyance, and love. In our survey (N = 1195), participants worldwide assigned emotions and intensities to 68 face emojis. According to our results, certain feelings, such as happiness or shock, are visualized by manifold emojis covering a broad spectrum of intensities. Other feelings, such as anger, have limited and only very intense representative visualizations. We further emphasize that the cultural background influences emojis' perception: for instance, linear-active cultures (e.g., UK, Germany) rate the intensity of such visualizations higher than multi-active (e.g., Brazil, Russia) or reactive cultures (e.g., Indonesia, Singapore). To summarize, our manuscript promotes future research on more expressive, culture-aware emoji design.
11
SkyPort: Investigating 3D Teleportation Methods in Virtual Environments
Andrii Matviienko (Technical University of Darmstadt, Darmstadt, Germany), Florian Müller (TU Darmstadt, Darmstadt, Germany), Martin Schmitz (Technical University of Darmstadt, Darmstadt, Germany), Marco Fendrich (TU Darmstadt, Darmstadt, Germany), Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
Teleportation has become the de facto standard of locomotion in Virtual Reality (VR) environments. However, teleportation with parabolic and linear target aiming methods is restricted to horizontal 2D planes, and it is unknown how these methods transfer to 3D space. In this paper, we propose six 3D teleportation methods in virtual environments based on the combination of two existing aiming methods (linear and parabolic) and three types of transitioning to a target (instant, interpolated and continuous). To investigate the performance of the proposed teleportation methods, we conducted a controlled lab experiment (N = 24) with a mid-air coin collection task to assess accuracy, efficiency and VR sickness. We discovered that the linear aiming method leads to faster and more accurate target selection. Moreover, a combination of linear aiming and instant transitioning leads to the highest efficiency and accuracy without increasing VR sickness.
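The two aiming methods can be sketched as simple geometry (illustrative only, not the study's implementation): linear aiming picks a point along the controller ray, while parabolic aiming follows a ballistic arc launched from the controller.

```python
import numpy as np

def linear_target(origin, direction, distance):
    """Linear aiming: the target lies along the controller ray."""
    d = direction / np.linalg.norm(direction)
    return origin + distance * d

def parabolic_target(origin, velocity, t, g=9.81):
    """Parabolic aiming: the target follows a ballistic arc from the controller."""
    gravity = np.array([0.0, -g, 0.0])
    return origin + velocity * t + 0.5 * gravity * t**2

# Illustrative values only: aiming from shoulder height (y up, z forward).
o = np.array([0.0, 1.5, 0.0])
print(linear_target(o, np.array([0.0, 0.5, 1.0]), 5.0))
print(parabolic_target(o, np.array([0.0, 4.0, 6.0]), 1.0))
```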
11
Causality-preserving Asynchronous Reality
Andreas Rene Fender (ETH Zürich, Zurich, Switzerland), Christian Holz (ETH Zürich, Zurich, Switzerland)
Mixed Reality is gaining interest as a platform for collaboration and focused work to a point where it may supersede current office settings in future workplaces. At the same time, we expect that interaction with physical objects and face-to-face communication will remain crucial for future work environments, which is a particular challenge in fully immersive Virtual Reality. In this work, we reconcile those requirements through a user's individual Asynchronous Reality, which enables seamless physical interaction across time. When a user is unavailable, e.g., focused on a task or in a call, our approach captures co-located or remote physical events in real-time, constructs a causality graph of co-dependent events, and lets immersed users revisit them at a suitable time in a causally accurate way. Enabled by our system AsyncReality, we present a workplace scenario that includes walk-in interruptions during a person's focused work, physical deliveries, and transient spoken messages. We then generalize our approach to a use-case agnostic concept and system architecture. We conclude by discussing the implications of an Asynchronous Reality for future offices.
11
Paper Trail: An Immersive Authoring System for Augmented Reality Instructional Experiences
Shwetha Rajaram (University of Michigan, Ann Arbor, Michigan, United States), Michael Nebeling (University of Michigan, Ann Arbor, Michigan, United States)
Prior work has demonstrated augmented reality's benefits to education, but current tools are difficult to integrate with traditional instructional methods. We present Paper Trail, an immersive authoring system designed to explore how to enable instructors to create AR educational experiences, leaving paper at the core of the interaction and enhancing it with various forms of digital media, animations for dynamic illustrations, and clipping masks to guide learning. To inform the system design, we developed five scenarios exploring the benefits that hand-held and head-worn AR can bring to STEM instruction and developed a design space of AR interactions enhancing paper based on these scenarios and prior work. Using the example of an AR physics handout, we assessed the system's potential with PhD-level instructors and its usability with XR design experts. In an elicitation study with high-school teachers, we study how Paper Trail could be used and extended to enable flexible use cases across various domains. We discuss benefits of immersive paper for supporting diverse student needs and challenges for making effective use of AR for learning.
11
Designing Visuo-Haptic Illusions with Proxies in Virtual Reality: Exploration of Grasp, Movement Trajectory and Object Mass
Martin Feick (Saarland Informatics Campus, Saarbrücken, Germany), Kora Persephone Regitz (Saarland Informatics Campus, Saarbrücken, Germany), Anthony Tang (University of Toronto, Toronto, Ontario, Canada), Antonio Krüger (DFKI, Saarland Informatics Campus, Saarbrücken, Germany)
Visuo-haptic illusions are a method to expand proxy-based interactions in VR by introducing unnoticeable discrepancies between the virtual and the real world. Yet, how different design variables affect the illusions with proxies is still unclear. To unpack a subset of variables, we conducted two user studies with 48 participants to explore the impact of (1) different grasping types and movement trajectories, and (2) different grasping types and object masses on the discrepancy which may be introduced. Our Bayes analysis suggests that grasping types and object masses (≤ 500 g) did not noticeably affect the discrepancy, but for movement trajectory, results were inconclusive. Further, we identified a significant difference between unrestricted and restricted movement trajectories. Our data shows considerable differences in participants’ proprioceptive accuracy, which seem to correlate with their prior VR experience. Finally, we illustrate the impact of our key findings on the visuo-haptic illusion design process by showcasing a new design workflow.
11
Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities
Jeongyeon Kim (KAIST, Daejeon, Korea, Republic of), Yubin Choi (KAIST, Daejeon, Korea, Republic of), Meng Xia (KAIST, Daejeon, Korea, Republic of), Juho Kim (KAIST, Daejeon, Korea, Republic of)
Most video-based learning content is designed for desktops without considering mobile environments. We (1) investigate the gap between mobile learners’ challenges and video engineers’ considerations using mixed methods and (2) provide design guidelines for creating mobile-friendly MOOC videos. To uncover learners’ challenges, we conducted a survey (n=134) and interviews (n=21), and evaluated the mobile adequacy of current MOOCs by analyzing 41,722 video frames from 101 video lectures. Interview results revealed low readability and situationally-induced impairments as major challenges. The content analysis showed a low guideline compliance rate for key design factors. We then interviewed 11 video production engineers to investigate design factors they mainly consider. The engineers mainly focus on the size and amount of content while lacking consideration for color, complex images, and situationally-induced impairments. Finally, we present and validate guidelines for designing mobile-friendly MOOCs, such as providing adaptive and customizable visual design and context-aware accessibility support.
11
AvatAR: An Immersive Analysis Environment for Human Motion Data Combining Interactive 3D Avatars and Trajectories
Patrick Reipschläger (Autodesk Research, Toronto, Ontario, Canada), Frederik Brudy (Autodesk Research, Toronto, Ontario, Canada), Raimund Dachselt (Technische Universität Dresden, Dresden, Germany), Justin Matejka (Autodesk Research, Toronto, Ontario, Canada), George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada), Fraser Anderson (Autodesk Research, Toronto, Ontario, Canada)
Analysis of human motion data can reveal valuable insights about the utilization of space and interaction of humans with their environment. To support this, we present AvatAR, an immersive analysis environment for the in-situ visualization of human motion data, that combines 3D trajectories, virtual avatars of people’s movement, and a detailed representation of their posture. Additionally, we describe how to embed visualizations directly into the environment, showing what a person looked at or what surfaces they touched, and how the avatar’s body parts can be used to access and manipulate those visualizations. AvatAR combines an AR HMD with a tablet to provide both mid-air and touch interaction for system control, as well as an additional overview to help users navigate the environment. We implemented a prototype and present several scenarios to show that AvatAR can enhance the analysis of human motion data by making data not only explorable, but experienceable.
11
Mouth Haptics in VR using a Headset Ultrasound Phased Array
Vivian Shen (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Craig Shultz (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Today’s consumer virtual reality (VR) systems offer limited haptic feedback via vibration motors in handheld controllers. Rendering haptics to other parts of the body is an open challenge, especially in a practical and consumer-friendly manner. The mouth is of particular interest, as it is a close second in tactile sensitivity to the fingertips, offering a unique opportunity to add fine-grained haptic effects. In this research, we developed a thin, compact, beamforming array of ultrasonic transducers, which can render haptic effects onto the mouth. Importantly, all components are integrated into the headset, meaning the user does not need to wear an additional accessory, or place any external infrastructure in their room. We explored several effects, including point impulses, swipes, and persistent vibrations. Our haptic sensations can be felt on the lips, teeth and tongue, which can be incorporated into new and interesting VR experiences.
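Focusing a phased array works by delaying each transducer so that all wavefronts arrive at the focal point in phase. The sketch below shows this standard computation; the 40 kHz frequency and the array geometry are assumptions, not the paper's hardware.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ_HZ = 40_000.0       # common ultrasonic transducer frequency (assumption)

def focus_phases(elements_xyz: np.ndarray, focal_point: np.ndarray) -> np.ndarray:
    """Per-element phase offsets so all wavefronts arrive at the focus in phase.

    Elements closer to the focal point fire later (larger delay) so that
    every wavefront arrives at the same moment; this is textbook
    phased-array focusing, independent of the paper's specific hardware.
    """
    dists = np.linalg.norm(elements_xyz - focal_point, axis=1)
    delays = (dists.max() - dists) / SPEED_OF_SOUND   # firing delays, seconds
    return (2 * np.pi * FREQ_HZ * delays) % (2 * np.pi)

# Toy 2x2 array on a headset brim, focusing 5 cm below it at the lips.
array_xyz = np.array([[x, 0.0, z] for x in (-0.01, 0.01) for z in (-0.01, 0.01)])
print(focus_phases(array_xyz, np.array([0.0, -0.05, 0.0])))
```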
11
Predicting Opportune Moments to Deliver Notifications in Virtual Reality
Kuan-Wen Chen (National Yang Ming Chiao Tung University, Hsinchu, Taiwan), Yung-Ju Chang (National Yang Ming Chiao Tung University, Hsinchu, Taiwan), Liwei Chan (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)
Virtual reality (VR) has increasingly been used in many areas, and the need to deliver notifications in VR is also expected to increase accordingly. However, untimely interruptions could largely impact the experience in VR. Identifying opportune times to deliver notifications to users allows for notifications to be scheduled in a way that minimizes disruption. We conducted a study to investigate the use of sensor data available on an off-the-shelf VR device and additional contextual information, including current activity and engagement of users, to predict opportune moments for sending notifications using deep learning models. Our analysis shows that using mainly sensor features could achieve 72% recall, 71% precision and 0.86 area under receiver operating characteristic (AUROC); performance can be further improved to 81% recall, 82% precision, and 0.93 AUROC if information about activity and summarized user engagement is included.
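As a much-simplified stand-in for the deep models used in the study, the sketch below trains a linear classifier on synthetic "sensor features" and reports the same metrics (recall, precision, AUROC); the features and data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 8 "sensor-derived" features per moment and a
# binary opportune/not label. Both are invented for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
pred = prob > 0.5
print("recall:   ", recall_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("AUROC:    ", roc_auc_score(y_te, prob))
```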
11
(Re)discovering the Physical Body Online: Strategies and Challenges to Approach Non-Cisgender Identity in Social Virtual Reality
Guo Freeman (Clemson University, Clemson, South Carolina, United States), Divine Maloney (Clemson University, Clemson, South Carolina, United States), Dane Acena (Clemson University, Clemson, South Carolina, United States), Catherine Barwulor (Clemson University, Clemson, South Carolina, United States)
The contemporary understanding of gender continues to highlight the complexity and variety of gender identities beyond a binary dichotomy regarding one’s biological sex assigned at birth. The emergence and popularity of various online social spaces also makes the digital presentation of gender even more sophisticated. In this paper, we use non-cisgender as an umbrella term to describe diverse gender identities that do not match people’s sex assigned at birth, including Transgender, Genderfluid, and Non-binary. We especially explore non-cisgender individuals’ identity practices and their challenges in novel social Virtual Reality (VR) spaces where they can present, express, and experiment with their identity in ways that traditional online social spaces cannot provide. We provide some of the first empirical evidence of how social VR platforms may introduce new and novel phenomena and practices of approaching diverse gender identities online. We also contribute to re-conceptualizing technology-supported identity practices by highlighting the role of (re)discovering the physical body online and informing the design of the emerging metaverse for supporting diverse gender identities in the future.
11
Interaction with Touch-Sensitive Knitted Fabrics: User Perceptions and Everyday Use Experiments
Denisa Qori McDonald (Drexel University, Philadelphia, Pennsylvania, United States), Shruti Mahajan (Worcester Polytechnic Institute, Worcester, Massachusetts, United States), Richard Vallett (Drexel University, Philadelphia, Pennsylvania, United States), Genevieve Dion (Westphal College of Media Arts & Design, Philadelphia, Pennsylvania, United States), Ali Shokoufandeh (College of Computing and Informatics, Philadelphia, Pennsylvania, United States), Erin Solovey (Worcester Polytechnic Institute, Worcester, Massachusetts, United States)
Recent work has investigated the construction of touch-sensitive knitted fabrics, capable of being manufactured at scale, and having only two connections to external hardware. Additionally, several sensor design patterns and application prototypes have been introduced. Our aim is to start shaping the future of this technology according to user expectations. Through a formative focus group study, we explore users' views of using these fabrics in different contexts and discuss potential concerns and application areas. Subsequently, we take steps toward addressing relevant questions, by first providing design guidelines for application designers. Furthermore, in one user study, we demonstrate that it is possible to distinguish different swipe gestures and identify accidental contact with the sensor, a common occurrence in everyday life. We then present experiments investigating the effect of stretching and laundering of the sensors on their resistance, providing insights about considerations necessary to include in computational models.
10
In-Depth Mouse: Integrating Desktop Mouse into Virtual Reality
Qian Zhou (Autodesk Research, Toronto, Ontario, Canada), George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada), Fraser Anderson (Autodesk Research, Toronto, Ontario, Canada)
Virtual Reality (VR) has potential for productive knowledge work, however, midair pointing with controllers or hand gestures does not offer the precision and comfort of traditional 2D mice. Directly integrating mice into VR is difficult as selecting targets in a 3D space is negatively impacted by binocular rivalry, perspective mismatch, and improperly calibrated control-display (CD) gain. To address these issues, we developed Depth-Adaptive Cursor, a 2D-mouse driven pointing technique for 3D selection with depth-adaptation that continuously interpolates the cursor depth by inferring what users intend to select based on the cursor position, the viewpoint, and the selectable objects. Depth-Adaptive Cursor uses a novel CD gain tool to compute a usable range of CD gains for general mouse-based pointing in VR. A user study demonstrated that Depth-Adaptive Cursor significantly improved performance compared with an existing mouse-based pointing technique without depth-adaption in terms of time (21.2%), error (48.3%), perceived workload, and user satisfaction.
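One way to picture depth adaptation, as an interpretation of the abstract rather than the paper's algorithm: infer the cursor's depth from the selectable object closest to the view ray, then ease the depth toward it.

```python
import numpy as np

def adaptive_cursor_depth(eye, cursor_dir, object_centers, prev_depth,
                          smoothing=0.2):
    """Infer a plausible cursor depth from nearby selectable objects.

    The cursor ray is the 2D mouse position unprojected through the
    viewpoint; depth eases toward the object closest to that ray.
    """
    d = cursor_dir / np.linalg.norm(cursor_dir)
    to_objs = object_centers - eye
    along = to_objs @ d                        # depth of each object along ray
    perp = np.linalg.norm(to_objs - np.outer(along, d), axis=1)
    target_depth = along[np.argmin(perp)]      # depth of the best candidate
    return prev_depth + smoothing * (target_depth - prev_depth)

eye = np.zeros(3)
objs = np.array([[0.1, 0.0, 1.0], [0.0, 0.05, 2.5]])
print(adaptive_cursor_depth(eye, np.array([0.0, 0.0, 1.0]), objs, prev_depth=1.5))
```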
10
ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies
Sebastian Hubenschmid (University of Konstanz, Konstanz, Germany)Jonathan Wieland (University of Konstanz, Konstanz, Germany)Daniel Immanuel Fink (University of Konstanz, Konstanz, Germany)Andrea Batch (University of Maryland, College Park, College Park, Maryland, United States)Johannes Zagermann (University of Konstanz, Konstanz, Germany)Niklas Elmqvist (University of Maryland, College Park, College Park, Maryland, United States)Harald Reiterer (University of Konstanz, Konstanz, Germany)
The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study.
10
It's Touching: Understanding Touch-Affect Association in Shape-Change with Kinematic Features
Feng Feng (University of Bristol, Bristol, United Kingdom)Dan Bennett (University of Bristol, Bristol, United Kingdom)Zhijun Fan (Shandong University, Jinan, Shandong, China)Oussama Metatla (University of Bristol, Bristol, United Kingdom)
With the proliferation of shape-change research in affective computing, there is a need to deepen understanding of affective responses to shape-change displays. Little research has focused on affective reactions to tactile experiences in shape-change, particularly in the absence of visual information. It is also rare to study responses to shape-change as it unfolds, isolated from a final shape-change outcome. We report on two studies of touch-affect associations, using the crossmodal "Bouba-Kiki" paradigm, to understand affective responses to shape-change as it unfolds. We investigate experiences with a shape-change gadget as it moves between rounded ("Bouba") and spiky ("Kiki") forms. We capture affective responses via the circumplex model and use a motion analysis approach to understand the certainty of these responses. We find that touch-affect associations are influenced by both the size and the frequency of the shape-change and may be modality-dependent, and that certainty in affective associations is influenced by association-consistency.
10
TapGazer: Text Entry with Finger Tapping and Gaze-directed Word Selection
Zhenyi He (New York University, New York, New York, United States)Christof Lutteroth (University of Bath, Bath, United Kingdom)Ken Perlin (New York University, New York, New York, United States)
While using VR, efficient text entry is a challenge: users cannot easily locate standard physical keyboards, and keys are often out of reach, e.g., when standing. We present TapGazer, a text entry system where users type by tapping their fingers in place. Users can tap anywhere as long as the identity of each tapping finger can be detected with sensors. Ambiguity between different possible input words is resolved by selecting target words with gaze. If gaze tracking is unavailable, ambiguity is resolved by selecting target words with additional taps. We evaluated TapGazer for seated and standing VR: seated novice users using touchpads as tap surfaces reached 44.81 words per minute (WPM), 79.17% of their QWERTY typing speed. Standing novice users tapped on their thighs with touch-sensitive gloves, reaching 45.26 WPM (71.91%). We analyze TapGazer with a theoretical performance model and discuss its potential for text input in future AR scenarios.
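The word-level ambiguity that gaze resolves can be sketched in a few lines: if each letter is identified only by the finger that types it, several words can share one tap sequence. The finger-to-letter grouping below follows standard touch typing, and the toy lexicon is made up; TapGazer's actual dictionary and ranking are not shown here.

    # Illustrative sketch of tap-sequence disambiguation (assumed QWERTY
    # touch-typing finger assignment; toy lexicon for demonstration).
    FINGER_OF = {
        **dict.fromkeys("qaz", "L4"), **dict.fromkeys("wsx", "L3"),
        **dict.fromkeys("edc", "L2"), **dict.fromkeys("rfvtgb", "L1"),
        **dict.fromkeys("yhnujm", "R1"), **dict.fromkeys("ik", "R2"),
        **dict.fromkeys("ol", "R3"), **dict.fromkeys("p", "R4"),
    }

    LEXICON = ["hello", "jello", "world", "mellow"]

    def candidates(tap_sequence):
        """tap_sequence: list of finger ids, one per tapped character."""
        return [w for w in LEXICON
                if len(w) == len(tap_sequence)
                and all(FINGER_OF[c] == f for c, f in zip(w, tap_sequence))]

    taps = [FINGER_OF[c] for c in "hello"]   # the user taps the word "hello"
    print(candidates(taps))                  # -> ['hello', 'jello']

Gaze (or an extra tap, when gaze tracking is unavailable) then selects among the surviving candidates.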
10
ReflecTouch: Detecting Grasp Posture of Smartphone Using Corneal Reflection Images
Xiang Zhang (Keio University, Yokohama City, Japan)Kaori Ikematsu (Yahoo Japan Corporation, Tokyo, Japan)Kunihiro Kato (Tokyo University of Technology, Tokyo, Japan)Yuta Sugiura (Keio University, Yokohama City, Japan)
By sensing how a user is holding a smartphone, adaptive user interfaces become possible, such as those that automatically switch the displayed content and the position of graphical user interface (GUI) components according to how the phone is being held. We propose ReflecTouch, a novel method for detecting how a smartphone is being held by capturing images of the smartphone screen reflected on the cornea with the built-in front camera. In these images, the areas where the user places their fingers on the screen appear as shadows, which makes it possible to estimate the grasp posture. Since most smartphones have a front camera, this method can be used regardless of the device model, and no additional sensor or hardware is required. We conducted data collection experiments to verify the classification accuracy of the proposed method for six different grasp postures, and the accuracy was 85%.
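A toy sketch of the final classification step might look like the following, assuming the corneal image has already been reduced to a small feature vector (here, a made-up shadow-centroid position) and using nearest-prototype matching; the paper's six postures and actual pipeline are not reproduced.

    # Toy sketch of grasp classification from assumed shadow-centroid
    # features; prototype vectors and posture names are invented.
    import math

    PROTOTYPES = {                    # grasp posture -> example feature vector
        "left_thumb":  [0.2, 0.8],
        "right_thumb": [0.8, 0.8],
        "two_thumbs":  [0.5, 0.9],
        "index_grip":  [0.5, 0.4],
    }

    def classify(features):
        return min(PROTOTYPES, key=lambda k: math.dist(PROTOTYPES[k], features))

    print(classify([0.75, 0.85]))  # -> right_thumb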
10
"Your Eyes Say You Have Used This Password Before": Identifying Password Reuse from Gaze Behavior and Keystroke Dynamics
Yasmeen Abdrabou (Bundeswehr University Munich, Munich, Bayern, Germany)Johannes Schütte (Bundeswehr University Munich, Munich, Germany)Ahmed Shams (German University in Cairo, Cairo, Egypt)Ken Pfeuffer (Aarhus University, Aarhus, Denmark)Daniel Buschek (University of Bayreuth, Bayreuth, Germany)Mohamed Khamis (University of Glasgow, Glasgow, United Kingdom)Florian Alt (Bundeswehr University Munich, Munich, Germany)
A significant drawback of text passwords for end-user authentication is password reuse. We propose a novel approach to detect password reuse by leveraging gaze and typing behavior, and we study its accuracy. We collected gaze and typing behavior from 49 users while they created accounts for 1) a webmail client and 2) a news website. While most participants came up with a new password, 32% reported having reused an old password when setting up their accounts. We then compared different ML models to detect password reuse from the collected data. Our models achieve an accuracy of up to 87.7% in detecting password reuse from gaze, 75.8% from typing, and 88.75% when considering both types of behavior. We demonstrate that, using gaze, password reuse can already be detected during the registration process, before users have entered their password. Our work paves the road for developing novel interventions to prevent password reuse.
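The behavioral fusion can be illustrated with a simple logistic score over gaze and typing features. The feature names, weights, and bias below are invented for the sketch; the paper compares trained ML models rather than hand-set weights.

    # Hedged sketch: fusing gaze and typing features with a linear
    # classifier. All feature names and weights are illustrative.
    import math

    WEIGHTS = {
        "gaze_fixation_count": -0.8,      # fewer fixations on the password field
        "gaze_switches_to_keyboard": -0.6,
        "typing_pause_mean_ms": -0.004,   # shorter pauses for a familiar password
        "backspace_count": -0.5,
    }
    BIAS = 3.0

    def p_reuse(features):
        z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
        return 1.0 / (1.0 + math.exp(-z))   # logistic probability of reuse

    sample = {"gaze_fixation_count": 2, "gaze_switches_to_keyboard": 1,
              "typing_pause_mean_ms": 150, "backspace_count": 0}
    print(f"P(reuse) = {p_reuse(sample):.2f}")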
10
Prediction for Retrospection: Integrating Algorithmic Stress Prediction into Personal Informatics Systems for College Students' Mental Health
Taewan Kim (KAIST, Daejeon, Korea, Republic of)Haesoo Kim (KAIST, Daejeon, Korea, Republic of)Ha Yeon Lee (Seoul National University, Seoul, Korea, Republic of)Hwarang Goh (Inha University, Incheon, Korea, Republic of)Shakhboz Abdigapporov (Inha University, Michuhol-gu, Incheon, Korea, Republic of)Mingon Jeong (Hanyang University, Seoul, Korea, Republic of)Hyunsung Cho (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Kyungsik Han (Hanyang University, Seoul, Korea, Republic of)Youngtae Noh (KENTECH, Naju-si, Jeollanam-do, Korea, Republic of)Sung-Ju Lee (KAIST, Daejeon, Korea, Republic of)Hwajung Hong (KAIST, Daejeon, Korea, Republic of)
Reflecting on stress-related data is critical in addressing one’s mental health. Personal Informatics (PI) systems augmented by algorithms and sensors have become popular ways to help users collect and reflect on data about stress. While prediction algorithms in PI systems are mainly used for diagnostic purposes, few studies examine how the explainability of algorithmic predictions can support user-driven self-insight. To this end, we developed MindScope, an algorithm-assisted stress management system that determines a user's stress level and explains how that level was computed from the user's everyday activities captured by a smartphone. In a 25-day field study with 36 college students, the prediction and explanation supported self-reflection: a process of re-establishing preconceptions about stress by identifying stress patterns and recalling past stress levels and patterns, which led to coping planning. We discuss the implications of exploiting prediction algorithms that facilitate user-driven retrospection in PI systems.
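The pairing of a prediction with a human-readable explanation might be sketched as follows, with hypothetical smartphone-sensed features and a linear model standing in for MindScope's actual algorithm.

    # Illustrative sketch: a prediction plus a feature-level explanation.
    # Features, weights, and the stress scale are hypothetical.
    FEATURES = {"sleep_hours": -0.6, "screen_time_hours": 0.3,
                "steps_thousands": -0.2, "time_at_campus_hours": 0.25}
    BASELINE = 2.0  # neutral stress level on an assumed 0-4 scale

    def predict_with_explanation(day):
        contributions = {k: FEATURES[k] * day[k] for k in FEATURES}
        level = BASELINE + sum(contributions.values())
        top = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))[:2]
        reasons = ", ".join(f"{k} ({v:+.2f})" for k, v in top)
        return level, f"driven mainly by {reasons}"

    level, why = predict_with_explanation(
        {"sleep_hours": 5, "screen_time_hours": 8, "steps_thousands": 2,
         "time_at_campus_hours": 6})
    print(f"stress {level:.1f}: {why}")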
10
Enabling Tangible Interaction on Non-touch Displays with Optical Mouse Sensor and Visible Light Communication
Yihui Yan (ShanghaiTech University, Shanghai, China)Zezhe Huang (ShanghaiTech University, Shanghai, China)Feiyang Xudu (ShanghaiTech University, Shanghai, China)Zhice Yang (ShanghaiTech University, Shanghai, China)
This paper presents Centaur, an input system that enables tangible interaction on displays, e.g., untouchable computer monitors. Centaur’s tangibles are built from low-cost optical mouse sensors, or can alternatively be emulated by commercially available optical mice. They are trackable when placed on the display, rendering a real-time, high-precision tangible interface. Even for ordinary personal computers, enabling Centaur requires no new hardware and imposes no installation burden. Centaur’s cost-effectiveness and wide availability open up new opportunities for tangible user interface (TUI) users and practitioners. Centaur’s key innovation lies in its tracking method: it embeds high-frequency light signals into different portions of the display content as location beacons. When the tangibles are placed on the screen, they sense the light signals with their optical mouse sensors and thus determine their locations accordingly. We develop four applications to showcase the potential usage of Centaur.
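The beacon scheme can be illustrated with a toy encoder/decoder: each screen cell flickers its ID as a bit pattern, and the sensor thresholds the brightness it samples. Real visible-light communication must also handle frame synchronization and noise, which this sketch (with assumed 8-bit IDs) ignores.

    # Simplified sketch of the location-beacon idea (assumed encoding:
    # each screen cell flickers its cell ID as a bit pattern).
    def encode_cell_id(cell_id, n_bits=8):
        return [(cell_id >> i) & 1 for i in reversed(range(n_bits))]

    def decode_brightness(samples, threshold=0.5):
        bits = [1 if s > threshold else 0 for s in samples]
        value = 0
        for b in bits:
            value = (value << 1) | b
        return value

    beacon = encode_cell_id(42)                      # what the display emits
    sensed = [0.9 if b else 0.1 for b in beacon]     # what the sensor reads
    print(decode_brightness(sensed))                 # -> 42, i.e. cell location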
10
OVRlap: Perceiving Multiple Locations Simultaneously to Improve Interaction in VR
Jonas Schjerlund (University of Copenhagen, Copenhagen, Denmark)Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)Joanna Bergström (University of Copenhagen, Copenhagen, Denmark)
We introduce OVRlap, a VR interaction technique that lets the user perceive multiple places simultaneously from a first-person perspective. OVRlap achieves this by overlapping viewpoints. At any time, only one viewpoint is active, meaning that the user may interact with objects therein. Objects seen from the active viewpoint are opaque, whereas objects seen from passive viewpoints are transparent. This allows users to perceive multiple locations at once and easily switch to the one in which they want to interact. We compare OVRlap and a single-viewpoint technique in a study where 20 participants complete object-collection and monitoring tasks. We find that participants are significantly faster and move their head significantly less with OVRlap in both tasks. We propose how the technique might be improved through automated switching of the active viewpoint and intelligent viewpoint rendering.
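The compositing rule is simple enough to sketch: the active viewpoint renders opaque and passive viewpoints render semi-transparent, with switching reassigning the active flag. The alpha value and class names below are illustrative; a real implementation would blend inside the render pipeline.

    # Minimal sketch of OVRlap-style compositing; alpha values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Viewpoint:
        name: str
        active: bool = False

    def composite_alphas(viewpoints, passive_alpha=0.35):
        return {vp.name: 1.0 if vp.active else passive_alpha for vp in viewpoints}

    views = [Viewpoint("desk", active=True), Viewpoint("shelf"), Viewpoint("door")]
    print(composite_alphas(views))   # {'desk': 1.0, 'shelf': 0.35, 'door': 0.35}

    def switch_active(viewpoints, name):   # user switches to interact elsewhere
        for vp in viewpoints:
            vp.active = (vp.name == name)

    switch_active(views, "shelf")
    print(composite_alphas(views))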
10
Paracentral and near-peripheral visualizations: Towards attention-maintaining secondary information presentation on OHMDs during in-person social interactions
Nuwan Nanayakkarawasam Peru Kandage Janaka (National University of Singapore, Singapore, Singapore)Chloe Haigh (National University of Singapore, Singapore, Singapore)Hyeongcheol Kim (National University of Singapore, Singapore, Singapore)Shan Zhang (National University of Singapore, Singapore, Singapore)Shengdong Zhao (National University of Singapore, Singapore, Singapore)
Optical see-through Head-Mounted Displays (OST HMDs, OHMDs) are known to facilitate situational awareness while accessing secondary information. However, information displayed on OHMDs can cause attention shifts, which distract users from natural social interactions. We hypothesize that information displayed in paracentral and near-peripheral vision can be better perceived while the user is maintaining eye contact during face-to-face conversations. Leveraging this idea, we designed a circular progress bar to provide progress updates in paracentral and near-peripheral vision. We compared it with textual and linear progress bars under two conversation settings: a simulated one with a digital conversation partner and a realistic one with a real partner. Results show that a circular progress bar can effectively reduce notification distractions without losing eye contact and is more preferred by users. Our findings highlight the potential of utilizing the paracentral and near-peripheral vision for secondary information presentation on OHMDs.
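A back-of-the-envelope placement calculation, under the assumption of a small-angle, pixels-per-degree display model (the 12 px/deg figure is made up, not the study's hardware), shows how a target eccentricity in paracentral or near-peripheral vision maps to a screen offset from the gaze point.

    # Sketch: converting a target visual eccentricity (paracentral vision
    # spans roughly 1-8 degrees from the gaze point) into a pixel offset,
    # assuming a fixed pixels-per-degree value for the OHMD.
    def offset_px(eccentricity_deg, pixels_per_degree=12.0):
        """Offset from the gaze point at which to center the progress ring."""
        return eccentricity_deg * pixels_per_degree

    for ecc in (2, 4, 8):
        print(f"{ecc} deg -> {offset_px(ecc):.0f} px from gaze point")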
10
First Steps Towards Designing Electrotactons: Investigating Intensity and Pulse Frequency as Parameters for Electrotactile Cues.
Yosuef Alotaibi (University of Glasgow, Glasgow, United Kingdom)John H. Williamson (University of Glasgow, Glasgow, United Kingdom)Stephen Anthony Brewster (University of Glasgow, Glasgow, United Kingdom)
Electrotactile stimulation is a novel form of haptic feedback, yet there is little work investigating its basic design parameters and how they create effective tactile cues. This paper describes two experiments that extend our knowledge of two key parameters. The first investigated the combination of pulse width and amplitude (Intensity) and its effect on sensations of urgency, annoyance, valence, and arousal. Results showed significant effects: increasing Intensity caused higher ratings of urgency, annoyance, and arousal but reduced valence. We established clear levels for differentiating each sensation. A second study then investigated Intensity and Pulse Frequency to find out how many distinguishable levels could be perceived. Results showed that both Intensity and Pulse Frequency significantly affected perception, with four distinguishable levels of Intensity and two of Pulse Frequency. These results add significant new knowledge about the parameter space of electrotactile cue design and help designers select suitable properties when creating electrotactile cues.
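The two parameters can be made concrete with a toy pulse-train generator in which Intensity couples pulse width and amplitude while Pulse Frequency sets pulse spacing. All units and ranges are assumptions for illustration, not the calibrated levels established in the studies.

    # Hedged sketch of the two studied parameters: Intensity couples
    # amplitude and pulse width; Pulse Frequency sets pulse spacing.
    # Units and ranges below are illustrative assumptions.
    def pulse_train(intensity, pulse_frequency_hz, duration_s=0.5):
        """intensity in [0, 1]; returns (start_time_s, width_us, amplitude_ma)."""
        width_us = 50 + 150 * intensity        # assumed pulse-width range
        amplitude_ma = 0.5 + 2.5 * intensity   # assumed current range
        period = 1.0 / pulse_frequency_hz
        n = int(duration_s / period)
        return [(i * period, width_us, amplitude_ma) for i in range(n)]

    for t, w, a in pulse_train(intensity=0.8, pulse_frequency_hz=10)[:3]:
        print(f"t={t:.2f}s width={w:.0f}us amp={a:.1f}mA")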