Notable Papers

Showing up to the top 30 papers in each category

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

9
MARVIS: Combining Mobile Devices and Augmented Reality for Visual Data Analysis
Ricardo Langner (Technische Universität Dresden, Dresden, Germany), Marc Satkowski (Technische Universität Dresden, Dresden, Germany), Wolfgang Büschel (Technische Universität Dresden, Dresden, Germany), Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
We present MARVIS, a conceptual framework that combines mobile devices and head-mounted Augmented Reality (AR) for visual data analysis. We propose novel concepts and techniques addressing visualization-specific challenges. By showing additional 2D and 3D information around and above displays, we extend their limited screen space. AR views between displays, as well as linking and brushing, are also supported, making relationships between separated visualizations apparent. We introduce the design process and rationale for our techniques. To validate MARVIS' concepts and show their versatility and widespread applicability, we describe six implemented example use cases. Finally, we discuss insights from expert hands-on reviews. As a result, we contribute to a better understanding of how the combination of one or more mobile devices with AR can benefit visual data analysis. By exploring this new type of visualization environment, we hope to provide a foundation and inspiration for future mobile data visualizations.
9
Understanding the Design Space of Embodied Passwords based on Muscle Memory
Rosa van Koningsbruggen (Bauhaus-Universität Weimar, Weimar, Germany), Bart Hengeveld (Eindhoven University of Technology, Eindhoven, Netherlands), Jason Alexander (University of Bath, Bath, United Kingdom)
Passwords have become a ubiquitous part of our everyday lives, needed for every web-service and system. However, it is challenging to create safe and diverse alphanumeric passwords, and to recall them, imposing a cognitive burden on the user. Through consecutive experiments, we explored the movement space, affordances and interaction, and memorability of a tangible, handheld, embodied password. In this context, we found that: (1) a movement space of 200 mm × 200 mm is preferred; (2) each context has a perceived level of safety, which—together with the affordances and link to familiarity—influences how the password is performed. Furthermore, the artefact’s dimensions should be balanced within the design itself, with the user, and the context, but there is a trade-off between the perceived safety and ergonomics; and (3) the designed embodied passwords can be recalled for at least a week, with participants creating unique passwords which were reproduced consistently.
8
Towards an Understanding of Situated AR Visualization for Basketball Free-Throw Training
Tica Lin (Harvard University, Cambridge, Massachusetts, United States), Rishi Singh (Harvard University, Cambridge, Massachusetts, United States), Yalong Yang (Harvard University, Cambridge, Massachusetts, United States), Carolina Nobre (Harvard University, Cambridge, Massachusetts, United States), Johanna Beyer (Harvard University, Cambridge, Massachusetts, United States), Maurice Smith (Harvard University, Cambridge, Massachusetts, United States), Hanspeter Pfister (Harvard University, Cambridge, Massachusetts, United States)
We present an observational study to compare co-located and situated real-time visualizations in basketball free-throw training. Our goal is to understand the advantages and concerns of applying immersive visualization to real-world skill-based sports training and to provide insights for designing AR sports training systems. We design both a situated 3D visualization on a head-mounted display and a 2D visualization on a co-located display to provide immediate visual feedback on a player's shot performance. Using a within-subject study design with experienced basketball shooters, we characterize user goals, report on qualitative training experiences, and compare the quantitative training results. Our results show that real-time visual feedback helps athletes refine subsequent shots. Shooters in our study achieve greater angle consistency with our visual feedback. Furthermore, AR visualization promotes an increased focus on body form in athletes. Finally, we present suggestions for the design of future sports AR studies.
8
Vinci: An Intelligent Graphic Design System for Generating Advertising Posters
Shunan Guo (Tongji University, Shanghai, China), Zhuochen Jin (Tongji University, Shanghai, China), Fuling Sun (Tongji University, Shanghai, China), Jingwen Li (Intelligent Big Data Visualization Lab, Tongji University, Shanghai, China), Zhaorui Li (Tongji University, Shanghai, China), Yang Shi (Tongji College of Design and Innovation, Shanghai, China), Nan Cao (Tongji College of Design and Innovation, Shanghai, China)
Advertising posters are a commonly used form of information presentation to promote a product. Producing them often demands considerable time and effort from designers, who face abundant choices of design elements and layouts. This paper presents Vinci, an intelligent system that supports the automatic generation of advertising posters. Given a user-specified product image and taglines, Vinci uses a deep generative model to match the product image with a set of design elements and layouts, generating an aesthetic poster. The system also integrates online editing-feedback that supports users in editing the posters and updating the generated results with their design preferences. Through a series of user studies and a Turing test, we found that Vinci can generate posters as good as those of human designers and that the online editing-feedback improves the efficiency of poster modification.
8
Data-Driven Mark Orientation for Trend Estimation in Scatterplots
Tingting Liu (School of Computer Science, Qingdao, Shandong, China), Xiaotong Li (School of Computer Science, Qingdao, Shandong, China), Chen Bao (Shandong University, Qingdao, Shandong, China), Michael Correll (Tableau Software, Seattle, Washington, United States), Changhe Tu (Shandong University, Qingdao, China), Oliver Deussen (University of Konstanz, Konstanz, Germany), Yunhai Wang (Shandong University, Qingdao, China)
A common task for scatterplots is communicating trends in bivariate data. However, the ability of people to visually estimate these trends is under-explored, especially when the data violate assumptions required for common statistical models, or visual trend estimates are in conflict with statistical ones. In such cases, designers may need to intervene and de-bias these estimations, or otherwise inform viewers about differences between statistical and visual trend estimations. We propose data-driven mark orientation as a solution in such cases, where the directionality of marks in the scatterplot guides viewers when visual estimation is otherwise unclear or ambiguous. Through a set of laboratory studies, we investigate trend estimation across a variety of data distributions and mark directionalities, and find that data-driven mark orientation can help resolve ambiguities in visual trend estimates.
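To make the idea concrete, here is a minimal sketch (ours, not the authors' implementation) of data-driven mark orientation: each point is drawn as a short line segment angled along a locally weighted regression slope, so the marks collectively hint at the trend. The bandwidth and mark length are arbitrary illustration choices.

```python
# Sketch of data-driven mark orientation: each scatterplot mark is a short
# line segment whose angle follows a locally weighted least-squares slope.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.3, 200)   # noisy nonlinear trend

def local_slope(x0, x, y, bandwidth=1.0):
    """Gaussian-weighted least-squares slope of y on x around x0."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    return np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)

half = 0.15  # half-length of each oriented mark, in data units
for xi, yi in zip(x, y):
    s = local_slope(xi, x, y)
    dx = half / np.hypot(1.0, s)   # unit direction along slope (1, s)
    dy = s * dx
    plt.plot([xi - dx, xi + dx], [yi - dy, yi + dy], color="steelblue", lw=1)
plt.show()
```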
7
STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics
Sebastian Hubenschmid (University of Konstanz, Konstanz, Germany), Johannes Zagermann (University of Konstanz, Konstanz, Germany), Simon Butscher (University of Konstanz, Konstanz, Germany), Harald Reiterer (University of Konstanz, Konstanz, Germany)
Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures these devices support by default to interact with visualizations in augmented reality (e.g., due to limited precision). Touch-based interaction (e.g., via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for the multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality and research implications indicating areas that need further investigation.
7
Improving Viewing Experiences of First-Person Shooter Gameplays with Automatically-Generated Motion Effects
Gyeore Yun (POSTECH, Pohang, Korea, Republic of), Hyoseung Lee (POSTECH, Pohang, Gyeongsangbuk-do, Korea, Republic of), Sangyoon Han (Pohang University of Science and Technology (POSTECH), Pohang, Korea, Republic of), Seungmoon Choi (Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, Korea, Republic of)
Millions of people now enjoy watching video gameplay at eSports stadiums or at home. We seek a method that improves gameplay viewing experiences by presenting multisensory stimuli. Using a motion chair, we provide motion effects automatically generated from the audiovisual stream to viewers watching a first-person shooter (FPS) gameplay. The motion effects express the game character's movement and gunfire action. We describe algorithms for the computation of such motion effects, developed using computer vision techniques and deep learning. Through a user study, we demonstrate that our method of providing motion effects significantly improves the viewing experience of FPS gameplay. The contributions of this paper are the motion synthesis algorithms integrated for FPS games and the empirical evidence for the benefits of multisensory gameplay viewing.
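As a rough illustration of one ingredient such a pipeline needs (the paper's own algorithms combine computer vision and deep learning; this sketch uses only classical optical flow), one could estimate global camera motion between consecutive gameplay frames and map it to chair commands. The gains and the mapping are invented for illustration.

```python
# Hedged sketch, not the paper's pipeline: estimate global camera motion with
# dense optical flow, then map mean flow to hypothetical chair yaw/pitch.
import cv2
import numpy as np

def motion_command(prev_gray: np.ndarray, curr_gray: np.ndarray):
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()  # global motion estimate
    yaw_gain, pitch_gain = 0.8, 0.6                    # made-up gains
    return yaw_gain * dx, pitch_gain * dy              # chair yaw, pitch

# Demo on synthetic frames: frame2 is frame1 shifted 3 px to the right.
frame1 = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
frame2 = np.roll(frame1, 3, axis=1)
print(motion_command(frame1, frame2))  # yaw command should be positive
```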
7
Data@Hand: Fostering Visual Exploration of Personal Data on Smartphones Leveraging Speech and Touch Interaction
Young-Ho Kim (University of Maryland, College Park, Maryland, United States), Bongshin Lee (Microsoft Research, Redmond, Washington, United States), Arjun Srinivasan (Tableau Research, Seattle, Washington, United States), Eun Kyoung Choe (University of Maryland, College Park, Maryland, United States)
Most mobile health apps employ data visualization to help people view their health and activity data, but these apps provide limited support for visual data exploration. Furthermore, despite its huge potential benefits, mobile visualization research in the personal data context is sparse. This work aims to empower people to easily navigate and compare their personal health data on smartphones by enabling flexible time manipulation with speech. We designed and developed Data@Hand, a mobile app that leverages the synergy of two complementary modalities: speech and touch. Through an exploratory study with 13 long-term Fitbit users, we examined how multimodal interaction helps participants explore their own health data. Participants successfully adopted multimodal interaction (i.e., speech and touch) for convenient and fluid data exploration. Based on the quantitative and qualitative findings, we discuss design implications and opportunities with multimodal interaction for better supporting visual data exploration on mobile devices.
7
Teardrop Glasses: Pseudo Tears Induce Sadness in You and Those Around You
Shigeo Yoshida (The University of Tokyo, Tokyo, Japan), Takuji Narumi (The University of Tokyo, Tokyo, Japan), Tomohiro Tanikawa (The University of Tokyo, Tokyo, Japan), Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan), Michitaka Hirose (The University of Tokyo, Tokyo, Japan)
Emotional contagion is a phenomenon in which emotions are transmitted unconsciously among individuals as they observe others' emotional expressions. In this paper, we propose a method for mediating people's emotions by triggering emotional contagion through artificial bodily changes such as pseudo tears. We focused on shedding tears because of its link to several emotions besides sadness. In addition, shedding tears is expected to induce emotional contagion because it is observable by others. We designed an eyeglasses-style wearable device, Teardrop glasses, that releases water drops near the wearer's eyes. The drops flow down the cheeks and emulate real tears. The study revealed that artificial crying with pseudo tears increased sadness among both wearers and those observing them. Moreover, artificial crying attenuated happiness and positive feelings in observers. Our findings show that actual bodily changes are not necessary for inducing emotional contagion; artificial bodily changes are also sufficient.
7
Stereo-Smell via Electrical Trigeminal Stimulation
Jas Brooks (University of Chicago, Chicago, Illinois, United States), Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States), Jingxuan Wen (University of Chicago, Chicago, Illinois, United States), Romain Nith (University of Chicago, Chicago, Illinois, United States), Jun Nishida (University of Chicago, Chicago, Illinois, United States), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a novel type of olfactory device that creates a stereo-smell experience, i.e., directional information about the location of an odor, by rendering the readings of external odor sensors as trigeminal sensations using electrical stimulation of the user’s nasal septum. The key is that the sensations from the trigeminal nerve, which arise from nerve-endings in the nose, are perceptually fused with those of the olfactory bulb (the brain region that senses smells). As such, we propose that electrically stimulating the trigeminal nerve is an ideal candidate for stereo-smell augmentation/substitution that, unlike other approaches, does not require implanted electrodes in the olfactory bulb. To realize this, we engineered a self-contained device that users wear across their nasal septum. Our device outputs by stimulating the user’s trigeminal nerve using electrical impulses with variable pulse-widths; and it inputs by sensing the user’s inhalations using a photoreflector. It measures 10x23 mm and communicates with external gas sensors using Bluetooth. In our user study, we found the key electrical waveform parameters that enable users to feel an odor’s intensity (absolute electric charge) and direction (phase order and net charge). In our second study, we demonstrated that participants were able to localize a virtual smell source in the room by using our prototype without any previous training. Using these insights, our device enables expressive trigeminal sensations and could function as an assistive device for people with anosmia, who are unable to smell.
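The waveform parameters the study identifies suggest a simple rendering scheme. The sketch below is a loose interpretation, not the paper's calibrated stimulation code: pulse width scales absolute charge (intensity), while phase order and a charge-balance factor encode direction. All numeric ranges are assumptions.

```python
# Loose sketch of the reported waveform idea; every constant is illustrative.
import numpy as np

def biphasic_pulse(amp_ua, width_us, lead="anodic", balance=1.0, fs=1_000_000):
    """One biphasic pulse. `lead` sets the phase order; `balance` < 1 leaves
    residual (net) charge in the leading polarity."""
    n = int(width_us * fs / 1_000_000)
    sign = 1 if lead == "anodic" else -1
    first = np.full(n, sign * amp_ua, dtype=float)
    second = np.full(n, -sign * amp_ua * balance)
    return np.concatenate([first, second])

def render_odor(intensity, direction):
    """intensity in [0, 1] scales pulse width (absolute charge); the sign of
    direction in [-1, 1] picks the phase order, its magnitude the net charge."""
    width_us = 50 + 450 * intensity                  # assumed 50-500 us range
    lead = "anodic" if direction >= 0 else "cathodic"
    return biphasic_pulse(200, width_us, lead, balance=1 - 0.5 * abs(direction))

pulse = render_odor(intensity=0.7, direction=-0.4)
print(len(pulse), float(pulse.sum()))  # net charge is nonzero when direction != 0
```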
6
MetaMap: Supporting Visual Metaphor Ideation through Multi-dimensional Example-based Exploration
Youwen Kang (Hong Kong University of Science and Technology, Hong Kong, Hong Kong, China), Zhida Sun (Hong Kong University of Science and Technology, Hong Kong, China), Sitong Wang (Columbia University, New York, New York, United States), Zeyu Huang (Hong Kong University of Science and Technology, Hong Kong, China), Ziming Wu (Hong Kong University of Science and Technology, Hong Kong, China), Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, Hong Kong)
Visual metaphors, which are widely used in graphic design, can deliver messages in creative ways by fusing different objects. The keys to creating visual metaphors are diverse exploration and creative combination, both of which are challenging with conventional methods like image searching. To streamline this ideation process, we propose a mind-map-like structure that recommends materials and assists users in exploring them. We present MetaMap, a supporting tool which inspires visual metaphor ideation through multi-dimensional example-based exploration. To facilitate the divergence and convergence of the ideation process, MetaMap provides 1) sample images based on keyword association and color filtering; 2) example-based exploration in semantics, color, and shape dimensions; and 3) thinking-path tracking and idea recording. We conduct a within-subject study with 24 design enthusiasts, taking a Pinterest-like interface as the baseline. Our evaluation results suggest that MetaMap provides an engaging ideation process and helps participants create diverse and creative ideas.
6
Quantitative Data Visualisation on Virtual Globes
Kadek Ananta Satriadi (Monash University, Melbourne, Australia), Barrett Ens (Monash University, Melbourne, Australia), Tobias Czauderna (Monash University, Victoria, Australia), Maxime Cordeil (Monash University, Melbourne, Australia), Bernhard Jenny (Monash University, Melbourne, Australia)
Geographic data visualisation on virtual globes is intuitive and widespread, but has not been thoroughly investigated. We explore two main design factors for quantitative data visualisation on virtual globes: i) commonly used primitives (2D bar, 3D bar, circle) and ii) the orientation of these primitives (tangential, normal, billboarded). We evaluate five distinctive visualisation idioms in a user study with 50 participants. The results show that aligning primitives tangentially on the globe's surface decreases the accuracy of area-proportional circle visualisations, while the orientation does not have a significant effect on the accuracy of length-proportional bar visualisations. We also find that tangential primitives induce higher perceived mental load than other orientations. Guided by these results we design a novel globe visualisation idiom, Geoburst, that combines a virtual globe and a radial bar chart. A preliminary evaluation reports potential benefits and drawbacks of the Geoburst visualisation.
6
Unmaking: Enabling and Celebrating the Creative Material of Failure, Destruction, Decay, and Deformation
Katherine W. Song (UC Berkeley, Berkeley, California, United States), Eric Paulos (UC Berkeley, Berkeley, California, United States)
The access and growing ubiquity of digital fabrication has ushered in a celebration of creativity and "making." However, the focus is often on the resulting static artifact or the creative process and tools to design it. We envision a post-making process that extends past these final static objects, not just in their making but in their "unmaking." By drawing from artistic movements such as Auto-Destructive Art, intentionally inverting well-established engineering principles of structurally sound designs, and safely misusing unstable materials, we demonstrate an important extension to making: unmaking. In this paper, we provide designers with a new vocabulary of unmaking operations within standard 3D modeling tools. We demonstrate how such designs can be realized using a novel multi-material 3D printing process. Finally, we detail how unmaking allows designs to change over time, is an ally to sustainability and re-usability, and captures themes of "aura," emotionality, and personalization.
6
Investigating the Homogenization of Web Design: A Mixed-Methods Approach
Samuel Goree (Indiana University, Bloomington, Indiana, United States), Bardia Doosti (Indiana University Bloomington, Bloomington, Indiana, United States), David Crandall (Indiana University, Bloomington, Indiana, United States), Norman Makoto Su (Indiana University, Bloomington, Indiana, United States)
Visual design provides the backdrop to most of our interactions over the Internet, but has not received as much analytical attention as textual content. Combining computational with qualitative approaches, we investigate the growing concern that the visual design of the World Wide Web has homogenized over the past decade. By applying computer vision techniques to a large dataset of representative website images from 2003–2019, we show that designs have become significantly more similar since 2007, especially for page layouts, where the average distance between sites decreased by over 30%. Synthesizing interviews with 11 experienced web design professionals with our computational analyses, we discuss causes of this homogenization, including overlap in source code and libraries, color scheme standardization, and support for mobile devices. Our results seek to motivate future discussion of the factors that influence designers and their implications on the future trajectory of web design.
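The layout-distance measurement can be pictured with a small sketch: given one feature vector per site per year (the actual feature extraction is beyond this example), homogenization shows up as a shrinking mean pairwise distance. The data here is synthetic.

```python
# Back-of-the-envelope sketch of the similarity analysis described above.
import numpy as np
from itertools import combinations

def mean_pairwise_distance(features: np.ndarray) -> float:
    """features: (n_sites, n_dims) layout descriptors for one year."""
    return float(np.mean([np.linalg.norm(a - b)
                          for a, b in combinations(features, 2)]))

rng = np.random.default_rng(1)
years = range(2003, 2020)
# Synthetic stand-in data: shrinking spread mimics homogenizing designs.
spread = np.linspace(1.0, 0.6, len(years))
trend = {y: mean_pairwise_distance(rng.normal(0, s, (50, 64)))
         for y, s in zip(years, spread)}
for y in (2003, 2007, 2019):
    print(y, round(trend[y], 3))
```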
6
IdeaBot: Investigating Social Facilitation in Human-Machine Team Creativity
Angel Hsing-Chi Hwang (Cornell University, Ithaca, New York, United States), Andrea Stevenson Won (Cornell University, Ithaca, New York, United States)
The present study investigates how human subjects collaborate with a computer-mediated chatbot in creative idea generation tasks. In three text-based between-group studies, we tested whether the perceived identity (i.e., whether the bot is perceived as a machine or as a human) or the conversational style of a teammate would moderate the outcomes of participants' creative production. In Study 1, participants worked with either a chatbot or a human confederate. In Study 2, all participants worked with a human teammate but were informed that their partner was either a human or a chatbot. Conversely, all participants worked with a chatbot in Study 3, but were told the identity of their partner was either a chatbot or a human. We investigated differences in idea generation outcomes and found that participants consistently contributed more ideas, and ideas of higher quality, when they perceived their teamworking partner as a bot. Furthermore, when the conversational style of the partner was robotic, participants with high anxiety in group communication reported greater creative self-efficacy in task performance. Finally, whether the perceived dominance of a partner and the pressure to come up with ideas during the task mediated positive outcomes of idea generation also depended on whether the conversational style of the bot partner was robot- or human-like. Based on our findings, we discuss implications for the future design of artificial agents as active team players in collaboration tasks.
5
More Kawaii than a Real-Person Streamer: Understanding How the Otaku Community Engages with and Perceives Virtual YouTubers
Zhicong Lu (City University of Hong Kong, Hong Kong, China), Chenxinran Shen (University of Toronto, Toronto, Ontario, Canada), Jiannan Li (University of Toronto, Toronto, Ontario, Canada), Hong Shen (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Daniel Wigdor (University of Toronto, Toronto, Ontario, Canada)
Live streaming has become increasingly popular, with most streamers presenting their real-life appearance. However, Virtual YouTubers (VTubers), virtual 2D or 3D avatars that are voiced by humans, are emerging as live streamers and attracting a growing viewership in East Asia. Although prior research has found that many viewers seek real-life interpersonal interactions with real-person streamers, it is currently unknown what makes VTuber live streams engaging or how they are perceived differently than real-person streamers. We conducted an interview study to understand how viewers engage with VTubers and perceive the identities of the voice actors behind the avatars (i.e., Nakanohito). The data revealed that virtual avatars bring unique performative opportunities which result in different viewer expectations and interpretations of VTuber behavior. Viewers intentionally upheld the disembodiment of VTuber avatars from their voice actors. We uncover the nuances in viewer perceptions and attitudes and further discuss the implications of VTuber practices to the understanding of live streaming in general.
5
How WEIRD is CHI?
Sebastian Linxen (University of Basel, Basel, Switzerland), Christian Sturm (Hamm-Lippstadt University of Applied Sciences, Lippstadt, North Rhine Westphalia, Germany), Florian Brühlmann (University of Basel, Basel, Switzerland), Vincent Cassau (Hochschule Hamm-Lippstadt, Lippstadt, North-Rhine-Westphalia, Germany), Klaus Opwis (University of Basel, Basel, Switzerland), Katharina Reinecke (University of Washington, Seattle, Washington, United States)
Computer technology is often designed in technology hubs in Western countries, invariably making it "WEIRD", because it is based on the intuition, knowledge, and values of people who are Western, Educated, Industrialized, Rich, and Democratic. Developing technology that is universally useful and engaging requires knowledge about members of WEIRD and non-WEIRD societies alike. In other words, it requires us, the CHI community, to generate this knowledge by studying representative participant samples. To find out to what extent CHI participant samples are from Western societies, we analyzed papers published in the CHI proceedings between 2016 and 2020. Our findings show that 73% of CHI study findings are based on Western participant samples, representing less than 12% of the world's population. Furthermore, we show that most participant samples at CHI tend to come from industrialized, rich, and democratic countries with generally highly educated populations. Encouragingly, recent years have seen a slight increase in non-Western samples and those that include several countries. We discuss suggestions for further broadening the international representation of CHI participant samples.
5
GestureMap: Supporting Visual Analytics and Quantitative Analysis of Motion Elicitation Data by Learning 2D Embeddings
Hai Duong Dang (University of Bayreuth, Bayreuth, Germany), Daniel Buschek (University of Bayreuth, Bayreuth, Germany)
This paper presents GestureMap, a visual analytics tool for gesture elicitation which directly visualises the space of gestures. Concretely, a Variational Autoencoder embeds gestures recorded as 3D skeletons on an interactive 2D map. GestureMap further integrates three computational capabilities to connect exploration to quantitative measures: Leveraging DTW Barycenter Averaging (DBA), we compute average gestures to 1) represent gesture groups at a glance; 2) compute a new consensus measure (variance around average gesture); and 3) cluster gestures with k-means. We evaluate GestureMap and its concepts with eight experts and an in-depth analysis of published data. Our findings show how GestureMap facilitates exploring large datasets and helps researchers to gain a visual understanding of elicited gesture spaces. It further opens new directions, such as comparing elicitations across studies. We discuss implications for elicitation studies and research, and opportunities to extend our approach to additional tasks in gesture elicitation.
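The computational pieces named above map onto off-the-shelf tools; a hedged sketch using the tslearn library (the paper's own implementation may differ) could look like this, with toy stand-ins for recorded gestures:

```python
# Sketch of the abstract's three computations: a DBA average gesture, a
# consensus measure (variance around the average), and DTW k-means clusters.
import numpy as np
from tslearn.barycenters import dtw_barycenter_averaging
from tslearn.clustering import TimeSeriesKMeans
from tslearn.metrics import dtw

rng = np.random.default_rng(2)
# Toy stand-in for elicited gestures: 30 sequences x 50 frames x 3 coords
# (real data would be full 3D skeletons, flattened per frame).
gestures = rng.normal(0, 1, (30, 50, 3)).cumsum(axis=1)

average = dtw_barycenter_averaging(gestures)           # representative gesture
consensus = np.mean([dtw(g, average) ** 2 for g in gestures])  # variance proxy
labels = TimeSeriesKMeans(n_clusters=3, metric="dtw",
                          random_state=0).fit_predict(gestures)
print(average.shape, round(consensus, 2), np.bincount(labels))
```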
5
Haptic and Visual Comprehension of a 2D Graph Layout Through Physicalisation
Adam Drogemuller (University of South Australia, Mawson Lakes, South Australia, Australia), Andrew Cunningham (University of South Australia, Adelaide, Australia), James A. Walsh (University of South Australia, Mawson Lakes, South Australia, Australia), James Baumeister (University of South Australia, Adelaide, South Australia, Australia), Ross T. Smith (University of South Australia, Adelaide, Australia), Bruce H. Thomas (University of South Australia, Mawson Lakes, South Australia, Australia)
Data physicalisations afford people the ability to directly interact with data using their hands, potentially achieving a more comprehensive understanding of a dataset. Due to their complex nature, the representation of graphs and networks could benefit from physicalisation, bringing the dataset from the digital world into the physical one. However, no empirical work exists investigating the effects physicalisations have upon comprehension as they relate to graph representations. In this work, we present initial design considerations for graph physicalisations, as well as an empirical study investigating differences in comprehension between virtual and physical representations. We found that participants perceived themselves as being more accurate via touch and sight (visual-haptic) than the graphical-only modality, and perceived a triangle count task as less difficult in visual-haptic than in the graphical-only modality. Additionally, we found that participants significantly preferred interacting with visual-haptic over other conditions, despite no significant effect on task time or error.
5
HairTouch: Providing Stiffness, Roughness and Surface Height Differences Using Reconfigurable Brush Hairs on a VR Controller
Chi-Jung Lee (National Taiwan University, Taipei, Taiwan), Hsin-Ruey Tsai (National Chengchi University, Taipei, Taiwan), Bing-Yu Chen (National Taiwan University, Taipei, Taiwan)
Tactile feedback is widely used to enhance realism in virtual reality (VR). When touching virtual objects, stiffness and roughness are common and obvious factors perceived by the users. Furthermore, when touching a surface with a complicated structure, differences in not only stiffness and roughness but also surface height are crucial. To integrate these factors, we propose a pin-based handheld device, HairTouch, to provide stiffness differences, roughness differences, surface height differences and their combinations. HairTouch consists of two pins for the two finger segments close to the index fingertip, respectively. By controlling the brush hairs' length and bending direction to change the hairs' elasticity and hair tip direction, each pin renders various stiffness and roughness, respectively. By further independently controlling the hairs' configuration and the pins' height, versatile stiffness, roughness and surface height differences are achieved. We conducted a perception study to understand users' ability to distinguish stiffness and roughness on each of the segments. Based on the results, we performed a VR experience study to verify that the tactile feedback from HairTouch enhances VR realism.
5
Touch&Fold: A Foldable Haptic Actuator for Rendering Touch in Mixed Reality
Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States), Pengyu Li (University of Chicago, Chicago, Illinois, United States), Romain Nith (University of Chicago, Chicago, Illinois, United States), Joshua Fonseca (University of Chicago, Chicago, Illinois, United States), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a nail-mounted foldable haptic device that provides tactile feedback to mixed reality (MR) environments by pressing against the user’s fingerpad when a user touches a virtual object. What is novel in our device is that it quickly tucks away when the user interacts with real-world objects. Its design allows it to fold back on top of the user’s nail when not in use, keeping the user’s fingerpad free to, for instance, manipulate handheld tools and other objects while in MR. To achieve this, we engineered a wireless and self-contained haptic device, which measures 24×24×41 mm and weighs 9.5 g. Furthermore, our foldable end-effector also features a linear resonant actuator, allowing it to render not only touch contacts (i.e., pressure) but also textures (i.e., vibrations). We demonstrate how our device renders contacts with MR surfaces, buttons, low- and high-frequency textures. In our first user study, we found that participants perceived our device to be more realistic than a previous haptic device that also leaves the fingerpad free (i.e., fingernail vibration). In our second user study, we investigated the participants’ experience while using our device in a real-world task that involved physical objects. We found that our device allowed participants to use the same finger to manipulate handheld tools, small objects, and even feel textures and liquids, without much hindrance to their dexterity, while feeling haptic feedback when touching MR interfaces.
5
VisiFit: Structuring Iterative Improvement for Novice Designers
Lydia B. Chilton (Columbia University, New York, New York, United States), Ecenaz Jen Ozmen (Columbia University, New York, New York, United States), Sam H. Ross (Barnard College, New York, New York, United States), Vivian Liu (Columbia University, New York, New York, United States)
Visual blends are an advanced graphic design technique to seamlessly integrate two objects into one. Existing tools help novices create prototypes of blends, but it is unclear how novices can improve these prototypes to higher fidelity. To help novices, we aim to add structure to the iterative improvement process. We introduce a method for improving prototypes that uses secondary design dimensions to explore a structured design space. This method is grounded in the cognitive principles of human visual object recognition. We present VisiFit, a computational design system that uses this method to enable novice graphic designers to improve blends with computationally generated options they can select, adjust, and chain together. Our evaluation shows novices can substantially improve 76% of blends in under 4 minutes. We discuss how the method can be generalized to other blending problems, and how computational tools can support novices by enabling them to explore a structured design space quickly and efficiently.
5
RubySlippers: Supporting Content-based Voice Navigation for How-to Videos
Minsuk Chang (KAIST, Daejeon, Korea, Republic of), Mina Huh (KAIST, Daejeon, Korea, Republic of), Juho Kim (KAIST, Daejeon, Korea, Republic of)
Directly manipulating the timeline, such as scrubbing for thumbnails, is the standard way of controlling how-to videos. However, when how-to videos involve physical activities, people inconveniently alternate between controlling the video and performing the tasks. Adopting a voice user interface allows people to control the video with voice while performing the tasks with their hands. However, naively translating timeline manipulation into a voice user interface (VUI) results in temporal referencing (e.g. "rewind 20 seconds"), which requires a different mental model for navigation and thereby limits users' ability to peek into the content. We present RubySlippers, a system that supports efficient content-based voice navigation through keyword-based queries. Our computational pipeline automatically detects referenceable elements in the video, and finds the video segmentation that minimizes the number of needed navigational commands. Our evaluation (N=12) shows that participants could perform three representative navigation tasks with fewer commands and less frustration using RubySlippers than with a conventional voice-enabled video interface.
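The navigation model can be sketched in a few lines: resolve a spoken keyword query to a referenceable segment instead of a temporal offset. Segment boundaries, keywords, and the tie-breaking rule below are invented for illustration.

```python
# Minimal sketch of content-based voice navigation: "go to the onions"
# resolves to a matching segment rather than to a time offset.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float          # seconds
    keywords: set[str]

segments = [
    Segment(0.0,   {"intro", "ingredients"}),
    Segment(42.0,  {"onions", "chopping"}),
    Segment(95.0,  {"pan", "frying", "onions"}),
    Segment(160.0, {"plating"}),
]

def resolve(query: str, current_t: float) -> float:
    words = set(query.lower().split())
    hits = [s for s in segments if words & s.keywords]
    if not hits:
        return current_t                      # unrecognized: stay put
    # Tie-break: prefer the matching segment closest to the current position.
    return min(hits, key=lambda s: abs(s.start - current_t)).start

print(resolve("go to the onions", current_t=100.0))   # -> 95.0
```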
5
Hidden Interaction Techniques: Concealed Information Acquisition and Texting on Smartphones and Wearables
Ville Mäkelä (LMU Munich, Munich, Germany), Johannes Kleine (LMU Munich, Munich, Germany), Maxine Hood (Wellesley College, Wellesley, Massachusetts, United States), Florian Alt (Bundeswehr University Munich, Munich, Germany), Albrecht Schmidt (LMU Munich, Munich, Germany)
There are many situations where using personal devices is not socially acceptable, or where nearby people present a privacy risk. For these situations, we explore the concept of hidden interaction techniques through two prototype applications. HiddenHaptics allows users to receive information through vibrotactile cues on a smartphone, and HideWrite allows users to write text messages by drawing on a dimmed smartwatch screen. We conducted three user studies to investigate whether, and how, these techniques can be used without being exposed. Our primary findings are (1) users can effectively hide their interactions while attending to a social situation, (2) users seek to interact when another person is speaking, and they also tend to hide the interaction using their body or furniture, and (3) users can sufficiently focus on the social situation despite their interaction, whereas non-users feel that observing the user hinders their ability to focus on the social activity.
5
UMLAUT: Debugging Deep Learning Programs using Program Structure and Model Behavior
Eldon Schoop (University of California, Berkeley, Berkeley, California, United States), Forrest Huang (University of California, Berkeley, Berkeley, California, United States), Bjoern Hartmann (UC Berkeley, Berkeley, California, United States)
Training deep neural networks can generate non-descriptive error messages or produce unusual output without any explicit errors at all. While experts rely on tacit knowledge to apply debugging strategies, non-experts lack the experience required to interpret model output and correct Deep Learning (DL) programs. In this work, we identify DL debugging heuristics and strategies used by experts, and use them to guide the design of Umlaut. Umlaut checks DL program structure and model behavior against these heuristics; provides human-readable error messages to users; and annotates erroneous model output to facilitate error correction. Umlaut links code, model output, and tutorial-driven error messages in a single interface. We evaluated Umlaut in a study with 15 participants to determine its effectiveness in helping developers find and fix errors in their DL programs. Participants using Umlaut found and fixed significantly more bugs compared to a baseline condition.
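The flavor of such heuristic checking can be sketched as follows; the real Umlaut hooks into Keras programs, whereas this toy version checks a hand-rolled run description against two example heuristics and prints human-readable messages.

```python
# Illustrative sketch in the spirit of Umlaut, not its actual API: rule-based
# checks over a (hypothetical) run description, emitting readable messages.
def check_run(run: dict) -> list[str]:
    messages = []
    if run["task"] == "classification" and run["last_activation"] != "softmax":
        messages.append(
            "Model output may not be a probability distribution: expected a "
            "softmax on the final layer, found "
            f"'{run['last_activation']}'. See: output activations tutorial.")
    if max(abs(v) for v in run["input_sample"]) > 10:
        messages.append(
            "Input data looks unnormalized (values far outside [-1, 1]); "
            "training may diverge. Consider standardizing features.")
    return messages

run = {"task": "classification", "last_activation": "linear",
       "input_sample": [255.0, 128.0, 0.0]}
for msg in check_run(run):
    print("UMLAUT:", msg)
```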
5
Visuo-haptic Illusions for Linear Translation and Stretching using Physical Proxies in Virtual Reality
Martin Feick (Saarland Informatics Campus, Saarbrücken, Germany), Niko Kleer (Saarland Informatics Campus, Saarbrücken, Germany), André Zenner (Saarland University, Saarland Informatics Campus, Saarbrücken, Germany), Anthony Tang (University of Toronto, Toronto, Ontario, Canada), Antonio Krüger (DFKI, Saarland Informatics Campus, Saarbrücken, Germany)
Providing haptic feedback when manipulating virtual objects is an essential part of immersive virtual reality experiences; however, it is challenging to replicate all of an object’s properties and characteristics. We propose the use of visuo-haptic illusions alongside physical proxies to enhance the scope of proxy-based interactions with virtual objects. In this work, we focus on two manipulation techniques, linear translation and stretching across different distances, and investigate how much discrepancy between the physical proxy and the virtual object may be introduced without participants noticing. In a study with 24 participants, we found that manipulation technique and travel distance significantly affect the detection thresholds, and that visuo-haptic illusions impact performance and accuracy. We show that this technique can be used to enable functional proxy objects that act as stand-ins for multiple virtual objects, illustrating the technique through a showcase VR-DJ application.
5
Collecting and Characterizing Natural Language Utterances for Specifying Data Visualizations
Arjun Srinivasan (Tableau Research, Seattle, Washington, United States), Nikhila Nyapathy (Georgia Institute of Technology, Atlanta, Georgia, United States), Bongshin Lee (Microsoft Research, Redmond, Washington, United States), Steven M. Drucker (Microsoft Research, Redmond, Washington, United States), John Stasko (Georgia Institute of Technology, Atlanta, Georgia, United States)
Natural language interfaces (NLIs) for data visualization are becoming increasingly popular both in academic research and in commercial software. Yet, there is a lack of empirical understanding of how people specify visualizations through natural language. We conducted an online study (N = 102), showing participants a series of visualizations and asking them to provide utterances they would pose to generate the displayed charts. From the responses, we curated a dataset of 893 utterances and characterized the utterances according to (1) their phrasing (e.g., commands, queries, questions) and (2) the information they contained (e.g., chart types, data aggregations). To help guide future research and development, we contribute this utterance dataset and discuss its applications toward the creation and benchmarking of NLIs for visualization.
5
Scene-Aware Behavior Synthesis for Virtual Pets in Mixed Reality
Wei Liang (Beijing Institute of Technology, Beijing, China), Xinzhe Yu (Beijing Institute of Technology, Beijing, China), Rawan Alghofaili (George Mason University, Fairfax, Virginia, United States), Yining Lang (Alibaba Group, Beijing, China), Lap-Fai Yu (George Mason University, Fairfax, Virginia, United States)
Virtual pets are an alternative to real pets, providing a substitute for people with allergies or preparing people for adopting a real pet. Recent advancements in mixed reality pave the way for virtual pets to provide a more natural and seamless experience for users. However, one key challenge is embedding environmental awareness into the virtual pet (e.g., identifying the food bowl's location) so that it can behave naturally in the real world. We propose a novel approach to synthesize virtual pet behaviors by considering scene semantics, enabling a virtual pet to behave naturally in mixed reality. Given a scene captured from the real world, our approach synthesizes a sequence of pet behaviors (e.g., resting after eating). Then, we assign each behavior in the sequence to a location in the real scene. We conducted user studies to evaluate our approach; the results showed its efficacy in synthesizing natural virtual pet behaviors.
5
Datamations: Animated Explanations of Data Analysis Pipelines
Xiaoying Pu (University of Michigan, Ann Arbor, Michigan, United States), Sean Kross (The University of California San Diego, La Jolla, California, United States), Jake M. Hofman (Microsoft Research, NYC, New York, United States), Daniel G. Goldstein (Microsoft Research, New York, New York, United States)
Plots and tables are commonplace in today's data-driven world, and much research has been done on how to make these figures easy to read and understand. Often, however, the information they contain conveys only the end result of a complex and subtle data analysis pipeline. This can leave the reader struggling to understand what steps were taken to arrive at a figure, and what implications this has for the underlying results. In this paper, we introduce datamations, which are animations designed to explain the steps that led to a given plot or table. We present the motivation and concept behind datamations, discuss how to programmatically generate them, and provide the results of two large-scale randomized experiments investigating how datamations affect people's abilities to understand potentially puzzling results compared to seeing only final plots and tables containing those results.
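A minimal sketch of the underlying idea, assuming a simple pandas pipeline: capture each intermediate table as its own frame, so the steps behind the final summary can be shown one at a time (a real datamation would animate them).

```python
# Sketch: record groupby-pipeline intermediates as "frames" to be explained
# step by step; here we just print them instead of animating.
import pandas as pd

df = pd.DataFrame({"degree": ["BA", "BA", "PhD", "PhD"],
                   "salary": [55, 65, 90, 40]})

frames = [("raw rows", df)]
grouped = df.sort_values("degree")                               # step 1: group
frames.append(("grouped by degree", grouped))
summary = df.groupby("degree", as_index=False)["salary"].mean()  # step 2: aggregate
frames.append(("mean salary per degree", summary))

for title, frame in frames:
    print(f"-- {title} --\n{frame}\n")
```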
5
TexYZ: Embroidering Enameled Wires for Three Degree-of-Freedom Mutual Capacitive Sensing
Roland Aigner (University of Applied Sciences Upper Austria, Hagenberg, Austria), Andreas Pointner (University of Applied Sciences Upper Austria, Hagenberg, Austria), Thomas Preindl (University of Applied Sciences Upper Austria, Hagenberg, Austria), Rainer Danner (University of Applied Sciences Upper Austria, Hagenberg, Austria), Michael Haller (University of Applied Sciences Upper Austria, Hagenberg, Austria)
In this paper, we present TexYZ, a method for rapid and effortless manufacturing of textile mutual capacitive sensors using a commodity embroidery machine. We use enameled wire as a bobbin thread to yield textile capacitors with high quality and consistency. As a consequence, we are able to leverage the precision and expressiveness of projected mutual capacitance for textile electronics, even when size is limited. Harnessing the assets of machine embroidery, we implement and analyze five distinct electrode patterns, examine the resulting electrical features with respect to geometrical attributes, and demonstrate the feasibility of two promising candidates for small-scale matrix layouts. The resulting sensor patches are further evaluated in terms of capacitance homogeneity, signal-to-noise ratio, sensing range, and washability. Finally, we demonstrate two use case scenarios, primarily focusing on continuous input with up to three degrees-of-freedom.
5
ArticuLev: An Integrated Self-Assembly Pipeline for Articulated Multi-Bead Levitation Primitives
Andreas Rene Fender (ETH, Zurich, Switzerland), Diego Martinez Plasencia (University College London, London, United Kingdom), Sriram Subramanian (University College London, London, United Kingdom)
Acoustic levitation is gaining popularity as an approach to create physicalized mid-air content by levitating different types of levitation primitives. Such primitives can be independent particles or particles that are physically connected via threads or pieces of cloth to form shapes in mid-air. However, initialization (i.e., placement of such primitives in their mid-air target locations) currently relies on either manual placement or specialized ad-hoc implementations, which limits their practical usage. We present ArticuLev, an integrated pipeline that deals with the identification, assembly and mid-air placement of levitated shape primitives. We designed ArticuLev with the physical properties of commonly used levitation primitives in mind. It enables experiences that seamlessly combine different primitives into meaningful structures (including fully articulated animated shapes) and supports various levitation display approaches (e.g., particles moving at high speed). In this paper, we describe our pipeline and demonstrate it with heterogeneous combinations of levitation primitives.
5
When the Social Becomes Non-Human: A Study of Young People’s Perception of Social Support in Chatbots
Petter Bae Brandtzæg (University of Oslo, Oslo, Norway), Marita Skjuve (SINTEF Digital, Oslo, Norway), Kim Kristoffer Dysthe (University of Oslo, Oslo, Norway), Asbjørn Følstad (SINTEF, Oslo, Norway)
Although social support is important for health and well-being, many young people are hesitant to reach out for support. The emerging uptake of chatbots for social and emotional purposes entails opportunities and concerns regarding non-human agents as sources of social support. To explore this, we invited 16 participants (16–21 years) to use and reflect on chatbots as sources of social support. Our participants first interacted with a chatbot for mental health (Woebot) for two weeks. Next, they participated in individual in-depth interviews. As part of the interview session, they were presented with a chatbot prototype providing information to young people. Two months later, the participants reported on their continued use of Woebot. Our findings provide in-depth knowledge about how young people may experience various types of social support—appraisal, informational, emotional, and instrumental support—from chatbots. We summarize implications for theory, practice, and future research.
5
Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics
Martin Schmitz (Technical University of Darmstadt, Darmstadt, Germany), Jan Riemann (Technical University of Darmstadt, Darmstadt, Germany), Florian Müller (TU Darmstadt, Darmstadt, Germany), Steffen Kreis (TU Darmstadt, Darmstadt, Germany), Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability by a series of example applications.
5
Prepare for Trouble and Make It Double: The Power Motive Predicts Pokémon Choices Based on Apparent Strength
Susanne Poeller (University of Trier, Trier, Germany), Karla Waldenmeier (University of Trier, Trier, Germany), Nicola Baumann (University of Trier, Trier, Germany), Regan L. Mandryk (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)
Two social motives are distinguished by Motive Disposition Theory: affiliation and power. Motives orient, select and energize our behaviour, suggesting that the choices of power-motivated individuals should be guided by power cues, such as the appearance of strength in a game character or avatar. In Study 1 we demonstrate that participants were more likely to pick strong-looking Pokémon for a fight and cute Pokémon as a companion. In addition, we show that even when considering these contexts, the power motive predicts preferences for a powerful appearance, whereas affiliation does not. In Study 2 we replicate the Study 1 findings and distinguish between two ways to enact the power motive (prosocial and dominant power). We demonstrate that the dominance, but not the prosociality, facet drives the preference for strong-looking Pokémon. Our findings suggest that the need to influence others—the power motive—drives the choice of battle companions who symbolize strength.
5
Sketchnote Components, Design Space Dimensions, and Strategies for Effective Visual Note Taking
Rebecca Zheng (University College London, London, United Kingdom), Marina Fernández Camporro (University College London, London, United Kingdom), Hugo Romat (ETH, Zurich, Switzerland), Nathalie Henry Riche (Microsoft Research, Redmond, Washington, United States), Benjamin Bach (Edinburgh University, Edinburgh, United Kingdom), Fanny Chevalier (University of Toronto, Toronto, Ontario, Canada), Ken Hinckley (Microsoft Research, Redmond, Washington, United States), Nicolai Marquardt (University College London, London, United Kingdom)
Sketchnoting is a form of visual note taking where people listen to, synthesize, and visualize ideas from a talk or other event using a combination of pictures, diagrams, and text. Little is known about the design space of this kind of visual note taking. With an eye towards informing the implementation of digital equivalents of sketchnoting, inking, and note taking, we introduce a classification of sketchnote styles and techniques, with a qualitative analysis of 103 sketchnotes, and situated in context with six semi-structured follow up interviews. Our findings distill core sketchnote components (content, layout, structuring elements, and visual styling) and dimensions of the sketchnote design space, classifying levels of conciseness, illustration, structure, personification, cohesion, and craftsmanship. We unpack strategies to address particular note taking challenges, for example dealing with constraints of live drawings, and discuss relevance for future digital inking tools, such as recomposition, styling, and design suggestions.
5
MIRIA: A Mixed Reality Toolkit for the In-Situ Visualization and Analysis of Spatio-Temporal Interaction Data
Wolfgang Büschel (Technische Universität Dresden, Dresden, Germany), Anke Lehmann (Technische Universität Dresden, Dresden, Germany), Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
In this paper, we present MIRIA, a Mixed Reality Interaction Analysis toolkit designed to support the in-situ visual analysis of user interaction in mixed reality and multi-display environments. So far, there are few options to effectively explore and analyze interaction patterns in such novel computing systems. With MIRIA, we address this gap by supporting the analysis of user movement, spatial interaction, and event data by multiple, co-located users directly in the original environment. Based on our own experiences and an analysis of the typical data, tasks, and visualizations used in existing approaches, we identify requirements for our system. We report on the design and prototypical implementation of MIRIA, which is informed by these requirements and offers various visualizations such as 3D movement trajectories, position heatmaps, and scatterplots. To demonstrate the value of MIRIA for real-world analysis tasks, we conducted expert feedback sessions using several use cases with authentic study data.
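One of the named visualizations, reduced to a miniature: binning logged user positions into a 2D heatmap. MIRIA renders such views in situ in mixed reality; the data and bin settings here are toy placeholders.

```python
# Sketch of a position heatmap computed from (simulated) tracking-log data.
import numpy as np

rng = np.random.default_rng(3)
# Simulated floor positions (meters) from a user-movement log.
xy = rng.normal(loc=[2.0, 1.5], scale=0.5, size=(500, 2))

heat, xedges, yedges = np.histogram2d(
    xy[:, 0], xy[:, 1], bins=8, range=[[0, 4], [0, 3]])
for row in heat.astype(int):          # crude text rendering of the bins
    print(" ".join(f"{v:3d}" for v in row))
```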
5
Can You Hear My Heartbeat?: Hearing an Expressive Biosignal Elicits Empathy
R. Michael Winters (Georgia Institute of Technology, Atlanta, Georgia, United States), Bruce N. Walker (Georgia Institute of Technology, Atlanta, Georgia, United States), Grace Leslie (Georgia Tech, Atlanta, Georgia, United States)
Interfaces designed to elicit empathy provide an opportunity for HCI with important pro-social outcomes. Recent research has demonstrated that perceiving expressive biosignals can facilitate emotional understanding and connection with others, but this work has been largely limited to visual approaches. We propose that hearing these signals will also elicit empathy, and test this hypothesis with sounding heartbeats. In a lab-based within-subjects study, participants (N=27) completed an emotion recognition task in different heartbeat conditions. We found that hearing heartbeats changed participants’ emotional perspective and increased their reported ability to “feel what the other was feeling.” From these results, we argue that auditory heartbeats are well-suited as an empathic intervention, and might be particularly useful for certain groups and use-contexts because of their musical and non-visual nature. This work establishes a baseline for empathic auditory interfaces, and offers a method to evaluate the effects of future designs.
5
Assessing Social Anxiety Through Digital Biomarkers Embedded in a Gaming Task
Martin Johannes Dechant (University of Saskatchewan, Saskatoon, Saskatchewan, Canada), Julian Frommel (University of Saskatchewan, Saskatoon, Saskatchewan, Canada), Regan L. Mandryk (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)
Digital biomarkers of mental health issues offer many advantages, including timely identification for early intervention, ongoing assessment during treatment, and reducing barriers to assessment stemming from geography, age, fear, or disparities in access to systems of care. Embedding digital biomarkers into games may further increase the reach of digital assessment. In this study, we explore game-based digital biomarkers for social anxiety, based on interaction with a non-player character (NPC). We show that social anxiety affects a player’s accuracy and their movement path in a gaming task involving an NPC. Further, we compared first versus third-person camera perspectives and the use of customized versus predefined avatars to explore the influence of common game interface factors on the expression of social anxiety through in-game movements. Our findings provide new insights about how game-based digital biomarkers can be effectively used for social anxiety, affording the benefits of early and ongoing digital assessment.
5
Soloist: Generating Mixed-Initiative Tutorials from Existing Guitar Instructional Videos Through Audio Processing
Bryan Wang (University of Toronto, Toronto, Ontario, Canada), Mengyu Yang (University of Toronto, Toronto, Ontario, Canada), Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
Learning musical instruments using online instructional videos has become increasingly prevalent. However, pre-recorded videos lack the instantaneous feedback and personal tailoring that human tutors provide. In addition, existing video navigation is not optimized for instrument learning, encumbering the learning experience. Guided by our formative interviews with guitar players and prior literature, we designed Soloist, a mixed-initiative learning framework that automatically generates customizable curriculums from off-the-shelf guitar video lessons. Soloist takes raw videos as input and leverages deep-learning-based audio processing to extract musical information. This back-end processing is used to provide an interactive visualization to support effective video navigation and real-time feedback on the user’s performance, creating a guided learning experience. We demonstrate the capabilities and specific use-cases of Soloist within the domain of learning electric guitar solos using instructional YouTube videos. A remote user study, conducted to gather feedback from guitar players, shows encouraging results, as the users unanimously preferred learning with Soloist over unconverted instructional videos.
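As a taste of the audio-processing step (the paper uses its own deep-learning pipeline; this sketch substitutes librosa's pYIN pitch tracker), raw guitar audio can be turned into note labels like so:

```python
# Sketch: estimate fundamental frequency and note names from audio. A real
# input would be loaded with librosa.load(); we synthesize a tone instead.
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 196.0 * t)       # stand-in for a G3 guitar note

f0, voiced, _ = librosa.pyin(y, fmin=80.0, fmax=1000.0, sr=sr)
notes = [librosa.hz_to_note(f) for f in f0[voiced & ~np.isnan(f0)]]
print(notes[:5])   # mostly 'G3'
```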
5
GuideCopter - A Precise Drone-Based Haptic Guidance Interface for Blind or Visually Impaired People
Felix Huppert (University of Passau, Passau, Bavaria, Germany)Gerold Hoelzl (University of Passau, Passau, Bavaria, Germany)Matthias Kranz (University of Passau, Passau, Bavaria, Germany)
Drone-assisted navigation aids that support the walking activities of visually impaired people have been established in related work, but fine-point object grasping and object localization in unknown environments still present an open and complex challenge. We present a drone-based interface that provides fine-grain haptic feedback and thus physically guides users in hand-object localization tasks in unknown surroundings. Our research is built around community groups of blind or visually impaired (BVI) people, who provided in-depth insights during the development process and later served as study participants. A pilot study gauges users' sensitivity to the applied guiding stimulus forces and to different human-drone tether interfacing possibilities. In a comparative follow-up study, we show that our drone-based approach achieves greater accuracy than a current audio-based hand-guiding system and delivers an overall more intuitive and relatable fine-point guiding experience.
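The abstract does not specify the control law; one plausible sketch, not the authors' implementation, is a per-frame proportional pull toward the target, saturated at a maximum tether force (gain and cap values invented):

```python
import numpy as np

def guidance_force(hand: np.ndarray, target: np.ndarray,
                   gain: float = 2.0, max_force: float = 1.5) -> np.ndarray:
    """Proportional haptic guidance: pull the tethered hand toward the target.

    hand, target: 3D positions in metres; returns a force vector in newtons.
    Gain and cap are illustrative; a real system would calibrate them
    against users' sensitivity to guiding stimulus forces.
    """
    error = target - hand
    force = gain * error
    magnitude = np.linalg.norm(force)
    if magnitude > max_force:
        force = force / magnitude * max_force  # saturate at the tether limit
    return force
```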
5
Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion and Inverse Kinematics
Karan Ahuja (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Sven Mayer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Mayank Goel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
We present Pose-on-the-Go, a full-body pose estimation system that uses sensors already found in today’s smartphones. This stands in contrast to prior systems, which require worn or external sensors. We achieve this result via extensive sensor fusion, leveraging a phone’s front and rear cameras, the user-facing depth camera, touchscreen, and IMU. Even so, some data about the user’s body (e.g., the angle of the elbow joint) remains unobserved, so we use inverse kinematics to estimate and animate probable body poses. We provide a detailed evaluation of our system, benchmarking it against a professional-grade Vicon tracking system. We conclude with a series of demonstration applications that underscore the unique potential of our approach, which could be enabled on many modern smartphones with a simple software update.
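The abstract names inverse kinematics without detail. For intuition, the textbook closed-form solution for a planar two-link limb (shoulder-elbow-wrist) recovers an unobserved elbow angle from an estimated wrist position; this is a generic sketch, not the Pose-on-the-Go solver:

```python
import math

def two_link_ik(x: float, y: float, l1: float, l2: float):
    """Closed-form IK for a planar two-link limb.

    (x, y): wrist position relative to the shoulder; l1, l2: upper-arm
    and forearm lengths. Returns (shoulder, elbow) angles in radians,
    or None if the target is out of reach.
    """
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        return None  # unreachable: beyond l1+l2 or inside |l1-l2|
    elbow = math.acos(cos_elbow)  # the "elbow-down" solution
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```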
4
Fork It: Supporting Stateful Alternatives in Computational Notebooks
Nathaniel Weinman (University of California, Berkeley, Berkeley, California, United States)Steven M. Drucker (Microsoft Research, Redmond, Washington, United States)Titus Barik (Microsoft, Redmond, Washington, United States)Robert A. DeLine (Microsoft Corp, Redmond, Washington, United States)
Computational notebooks, which seamlessly interleave code with results, have become a popular tool for data scientists due to the iterative nature of exploratory tasks. However, notebooks provide a single execution state that users manipulate by creating and updating variables. When exploring alternatives, data scientists must carefully create many-step manipulations in visually distant cells. We conducted formative interviews with 6 professional data scientists, motivating design principles behind exposing multiple states. We introduce forking --- creating a new interpreter session --- and backtracking --- navigating through previous states. We implement these interactions as an extension to notebooks that helps data scientists more directly express and navigate through decision points in a single notebook. In a qualitative evaluation, 11 professional data scientists found the tool would be useful for exploring alternatives and debugging code to create a predictive model. Their insights highlight further challenges to scaling this functionality.
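Forking and backtracking amount to keeping a tree of interpreter snapshots instead of one mutable state. A toy model over plain dictionaries (nothing like the authors' actual notebook extension) shows the idea:

```python
import copy

class NotebookState:
    """Toy model of forkable, backtrackable notebook sessions."""

    def __init__(self, variables=None, parent=None):
        self.variables = variables or {}
        self.parent = parent  # previous state, enabling backtracking

    def run(self, updates: dict) -> "NotebookState":
        """'Execute a cell': snapshot current variables, apply updates."""
        new_vars = copy.deepcopy(self.variables)
        new_vars.update(updates)
        return NotebookState(new_vars, parent=self)

    def fork(self) -> "NotebookState":
        """Start an alternative branch from this state."""
        return NotebookState(copy.deepcopy(self.variables), parent=self)

# Two alternatives explored from the same decision point in an analysis:
base = NotebookState().run({"df": "cleaned data"})
alt_a = base.fork().run({"model": "linear"})
alt_b = base.fork().run({"model": "tree"})
assert base.variables == {"df": "cleaned data"}  # the base branch is untouched
```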
4
Let’s Frets! Assisting Guitar Students during Practice via Capacitive Sensing
Karola Marky (Technische Universität Darmstadt, Darmstadt, Germany)Andreas Weiß (Music School Schallkultur, Kaiserslautern, Germany)Andrii Matviienko (Technical University of Darmstadt, Darmstadt, Germany)Florian Brandherm (Technische Universität Darmstadt, Darmstadt, Germany)Sebastian Wolf (Technische Universität Darmstadt, Darmstadt, Germany)Martin Schmitz (Technical University of Darmstadt, Darmstadt, Germany)Florian Krell (TU Darmstadt, Darmstadt, Germany)Florian Müller (TU Darmstadt, Darmstadt, Germany)Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)Thomas Kosch (Technische Universität Darmstadt, Darmstadt, Germany)
Learning a musical instrument requires regular exercise. However, students are often on their own during their practice sessions due to the limited time with their teachers, which increases the likelihood of mislearning playing techniques. To address this issue, we present Let's Frets - a modular guitar learning system that provides visual indicators and capturing of finger positions on a 3D-printed capacitive guitar fretboard. We based the design of Let's Frets on requirements collected through in-depth interviews with professional guitarists and teachers. In a user study (N=24), we evaluated the feedback modules of Let's Frets against fretboard charts. Our results show that visual indicators require the least time to realize new finger positions, while a combination of visual indicators and position capturing yielded the highest playing accuracy. We conclude by discussing how Let's Frets enables independent practice sessions and how it can be translated to other musical instruments.
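The abstract does not describe the sensing pipeline. Conceptually, a capacitive fretboard yields a string-by-fret matrix of readings, and detecting finger positions reduces to thresholding that matrix and comparing against the exercise's target positions; the sketch below invents all values and is not the Let's Frets implementation:

```python
import numpy as np

# Hypothetical capacitance readings: 6 strings x 5 frets (0-indexed),
# higher values indicate a finger touching that position.
readings = np.array([
    [0.1, 0.9, 0.1, 0.0, 0.1],   # string 0 pressed at fret 1
    [0.0, 0.1, 0.8, 0.1, 0.0],   # string 1 pressed at fret 2
    [0.1, 0.0, 0.1, 0.1, 0.0],
    [0.0, 0.1, 0.0, 0.9, 0.1],   # string 3 pressed at fret 3
    [0.1, 0.1, 0.0, 0.1, 0.0],
    [0.0, 0.0, 0.1, 0.0, 0.1],
])

THRESHOLD = 0.5  # invented cutoff separating touch from noise
pressed = {(s, f) for s, f in zip(*np.where(readings > THRESHOLD))}

# Compare against the finger positions of the exercise's target chord.
target = {(0, 1), (1, 2), (3, 3)}
accuracy = len(pressed & target) / len(target)
print(f"correct positions: {accuracy:.0%}")
```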
4
The Design Space of Wearables for Sports and Fitness Practices
Laia Turmo Vidal (Uppsala University, Uppsala, Sweden)Hui Zhu (Uppsala University, Uppsala, Sweden)Annika Waern (Dept of Informatics and Media, Uppsala, Sweden)Elena Márquez Segura (Universidad Carlos III de Madrid, Madrid, Spain)
The growing interest in wearables for sports and fitness calls for design knowledge and conceptualizations that can help shape future designs. Towards that end, we present and discuss a design space of wearables for these practices, based on a survey of previous work. Through a thematic analysis of 47 research publications in the domain, we surface core design decisions concerning wearability, technology design, and wearable use in practice. Building on these, we show how the design space takes into account the goals of introducing technology; whether design decisions are fixed by the designer or left open for appropriation by end-users; and the social organization of the practice. We characterize prior work based on the design space elements, which yields trends and opportunities for design. Our contributions can help designers think about key design decisions, exploit trends and explore new areas in the domain of wearables for sports and fitness practices.
4
Bad Breakdowns, Useful Seams, and Face Slapping: Analysis of VR Fails on YouTube
Emily Dao (Monash University, Melbourne, Victoria, Australia)Andreea Muresan (University of Copenhagen, Copenhagen, Denmark)Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)Jarrod Knibbe (University of Melbourne, Melbourne, Australia)
Virtual reality (VR) is increasingly used in complex social and physical settings outside of the lab. However, not much is known about how these settings influence use, nor how to design for them. We analyse 233 YouTube videos of VR Fails to: (1) understand when breakdowns occur, and (2) reveal how the seams between VR use and the social and physical setting emerge. The videos show a variety of fails, including users flailing, colliding with surroundings, and hitting spectators. They also suggest causes of the fails, including fear, sensorimotor mismatches, and spectator participation. We use the videos as inspiration to generate design ideas. For example, we discuss more flexible boundaries between the real and virtual world, ways of involving spectators, and interaction designs to help overcome fear. Based on the findings, we further discuss the ‘moment of breakdown’ as an opportunity for designing engaging and enhanced VR experiences.
4
Proxemics and Social Interactions in an Instrumented Virtual Reality Workshop
Julie R. Williamson (University of Glasgow, Glasgow, United Kingdom)Jie Li (Centrum Wiskunde & Informatica, Amsterdam, Netherlands)David A. Shamma (Centrum Wiskunde & Informatica, Amsterdam, Netherlands)Vinoba Vinayagamoorthy (BBC Research & Development, London, United Kingdom)Pablo Cesar (CWI, Amsterdam, Netherlands)
Virtual environments (VEs) can create collaborative and social spaces, which are increasingly important in the face of remote work and travel reduction. Recent advances, such as more open and widely available platforms, create new possibilities to observe and analyse interaction in VEs. Using a custom instrumented build of Mozilla Hubs to measure position and orientation, we conducted an academic workshop to facilitate a range of typical workshop activities. We analysed social interactions during a keynote, small group breakouts, and informal networking/hallway conversations. Our mixed-methods approach combined environment logging, observations, and semi-structured interviews. The results demonstrate how small and large spaces influenced group formation, shared attention, and personal space, where smaller rooms facilitated more cohesive groups while larger rooms made small group formation challenging but personal space more flexible. Beyond our findings, we show how the combination of data and insights can fuel collaborative spaces' design and deliver more effective virtual workshops.
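Given position logs like those from the instrumented Hubs build, personal-space analysis reduces to pairwise avatar distances per logged time step. A minimal sketch follows; the log format, avatar names, and zone cutoffs (loosely after Hall's proxemic zones) are illustrative, not the authors' analysis code:

```python
import numpy as np
from itertools import combinations

def interpersonal_distances(frame: dict) -> dict:
    """Pairwise avatar distances for one logged time step.

    frame maps avatar id -> (x, y, z) position in metres.
    """
    return {
        (a, b): float(np.linalg.norm(np.subtract(frame[a], frame[b])))
        for a, b in combinations(sorted(frame), 2)
    }

frame = {"alice": (0.0, 0.0, 0.0),
         "bob":   (1.2, 0.0, 0.0),
         "carol": (0.0, 0.0, 3.5)}

for pair, d in interpersonal_distances(frame).items():
    # Simplified zone labels inspired by Hall's proxemics.
    zone = "personal" if d < 1.2 else "social" if d < 3.7 else "public"
    print(pair, round(d, 2), zone)
```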
4
Mindless Attractor: A False-Positive Resistant Intervention for Drawing Attention Using Auditory Perturbation
Riku Arakawa (The University of Tokyo, Hongo, Japan)Hiromu Yakura (University of Tsukuba, Tsukuba, Japan)
Explicitly alerting users is not always an optimal intervention, especially when they are not motivated to obey. For example, in video-based learning, learners who are distracted from the video would not follow an alert asking them to pay attention. Inspired by the concept of Mindless Computing, we propose a novel intervention approach, Mindless Attractor, that leverages the nature of human speech communication to help learners refocus their attention without relying on their motivation. Specifically, it perturbs the voice in the video to direct their attention without consuming their conscious awareness. Our experiments not only confirmed the validity of the proposed approach but also emphasized its advantages in combination with a machine-learning-based sensing module: it does not frustrate users even when the intervention is triggered by a false-positive detection of their attentional state. Our intervention approach can be a reliable way to induce behavioral change in human-AI symbiosis.
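As a stand-in for the voice perturbation (the paper's exact manipulation and parameters are not given in the abstract), a subtle offline pitch shift with librosa illustrates the kind of auditory change meant; file paths and the one-semitone step are invented:

```python
import librosa
import soundfile as sf

# Load the lecture audio at its native sampling rate (path hypothetical).
y, sr = librosa.load("lecture.wav", sr=None, mono=True)

# A subtle perturbation: shift the voice up by one semitone. The actual
# system applies such changes in real time, and only while the learner
# is detected as inattentive.
perturbed = librosa.effects.pitch_shift(y, sr=sr, n_steps=1.0)

sf.write("lecture_perturbed.wav", perturbed, sr)
```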
4
Standardizing Participant Compensation Reporting in HCI: A Meta-Review and Recommendations for the Field
Jessica Pater (Parkview Health, Fort Wayne, Indiana, United States)Amanda Coupe (Parkview Health, Fort Wayne, Indiana, United States)Rachel Pfafman (Parkview Health, Fort Wayne, Indiana, United States)Chanda Phelan (University of Michigan, Ann Arbor, Michigan, United States)Tammy Toscos (Parkview Health, Fort Wayne, Indiana, United States)Maia Jacobs (Northwestern University, Evanston, Illinois, United States)
The user study is a fundamental method used in HCI. In designing user studies, we often use compensation strategies to incentivize recruitment. However, compensation can also lead to ethical issues, such as coercion. The CHI community has yet to establish best practices for participant compensation. Through a systematic review of manuscripts at CHI and other associated publication venues, we found high levels of variation in the compensation strategies used within the community and in how we report on this aspect of study methods. A qualitative analysis of the justifications offered for compensation sheds light on how some researchers currently contextualize this practice. This paper describes current compensation strategies and offers information that can inform the design of compensation strategies in future studies. The findings may help generate productive discourse in the HCI community towards the development of best practices for participant compensation in user studies.
4
Sticky Goals: Understanding Goal Commitments for Behavioral Changes in the Wild
Hyunsoo Lee (KAIST, Daejeon, Korea, Republic of)Auk Kim (Kangwon National University, Chuncheon, Korea, Republic of)Hwajung Hong (Seoul National University, Seoul, Korea, Republic of)Uichin Lee (KAIST, Daejeon, Korea, Republic of)
A commitment device, an attempt to bind oneself for a successful goal achievement, has been used as an effective strategy to promote behavior change. However, little is known about how commitment devices are used in the wild, and what aspects of commitment devices are related to goal achievements. In this paper, we explore a large-scale dataset from stickK, an online behavior change support system that provides both financial and social commitments. We characterize the patterns of behavior change goals (e.g., topics and commitment setting) and then perform a series of multilevel regression analyses on goal achievements. Our results reveal that successful goal achievements are largely dependent on the configuration of financial and social commitment devices, and a mixed commitment setting is considered beneficial. We discuss how our findings could inform the design of effective commitment devices, and how large-scale data can be leveraged to support data-driven goal elicitation and customization.
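The abstract mentions multilevel regression on goal achievements. A much-simplified analogue with a random intercept per user can be sketched in statsmodels; the column names and toy data are invented (the real analyses used stickK's large-scale dataset), and a binomial mixed model would suit the binary outcome better than the linear one used here for brevity:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy per-goal records: one row per goal, goals nested within users.
goals = pd.DataFrame({
    "user_id":     [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "achieved":    [1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1],  # success (0/1)
    "has_stakes":  [1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1],  # financial commitment
    "has_referee": [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1],  # social commitment
})

# Random intercept per user captures the multilevel (goals-within-users)
# structure while estimating fixed effects of the commitment devices.
model = smf.mixedlm("achieved ~ has_stakes + has_referee",
                    data=goals, groups=goals["user_id"])
print(model.fit().summary())
```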
4
Rethinking Eye-blink: Assessing Task Difficulty through Physiological Representation of Spontaneous Blinking
Youngjun Cho (University College London, London, United Kingdom)
Continuous assessment of task difficulty and mental workload is essential in improving the usability and accessibility of interactive systems. Eye tracking data has often been investigated to achieve this ability, with reports on the limited role of standard blink metrics. Here, we propose a new approach to the analysis of eye-blink responses for automated estimation of task difficulty. The core module is a time-frequency representation of eye-blink, which aims to capture the richness of information reflected on blinking. In our first study, we show that this method significantly improves the sensitivity to task difficulty. We then demonstrate how to form a framework where the represented patterns are analyzed with multi-dimensional Long Short-Term Memory recurrent neural networks for their non-linear mapping onto difficulty-related parameters. This framework outperformed other methods that used hand-engineered features. This approach works with any built-in camera, without requiring specialized devices. We conclude by discussing how Rethinking Eye-blink can benefit real-world applications.
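A time-frequency representation of a blink signal can be produced with a short-time Fourier transform; the sketch below is a minimal stand-in for the paper's representation module, using a synthetic eye-openness signal and invented parameters, and omitting the LSTM stage entirely:

```python
import numpy as np
from scipy import signal

fs = 30.0  # frame rate (Hz) of an ordinary built-in camera
t = np.arange(0, 60, 1 / fs)

# Synthetic eye-openness signal: eyes open (1.0) with brief ~150 ms dips
# to closed (0.0) as spontaneous blinks at random times.
rng = np.random.default_rng(0)
openness = np.ones_like(t)
for blink_start in rng.uniform(0, 59, size=20):
    idx = (t >= blink_start) & (t < blink_start + 0.15)
    openness[idx] = 0.0

# Short-time Fourier transform of the blink signal: a time-frequency
# pattern of the kind the paper feeds to a multi-dimensional LSTM.
freqs, times, spec = signal.spectrogram(openness, fs=fs, nperseg=128)
print(spec.shape)  # (frequency bins, time windows)
```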