List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

9
MARVIS: Combining Mobile Devices and Augmented Reality for Visual Data Analysis
Ricardo Langner (Technische Universität Dresden, Dresden, Germany); Marc Satkowski (Technische Universität Dresden, Dresden, Germany); Wolfgang Büschel (Technische Universität Dresden, Dresden, Germany); Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
We present MARVIS, a conceptual framework that combines mobile devices and head-mounted Augmented Reality (AR) for visual data analysis. We propose novel concepts and techniques addressing visualization-specific challenges. By showing additional 2D and 3D information around and above displays, we extend their limited screen space. AR views between displays as well as linking and brushing are also supported, making relationships between separated visualizations plausible. We introduce the design process and rationale for our techniques. To validate MARVIS' concepts and show their versatility and widespread applicability, we describe six implemented example use cases. Finally, we discuss insights from expert hands-on reviews. As a result, we contribute to a better understanding of how the combination of one or more mobile devices with AR can benefit visual data analysis. By exploring this new type of visualization environment, we hope to provide a foundation and inspiration for future mobile data visualizations.
9
Understanding the Design Space of Embodied Passwords based on Muscle Memory
Rosa van Koningsbruggen (Bauhaus-Universität Weimar, Weimar, Germany); Bart Hengeveld (Eindhoven University of Technology, Eindhoven, Netherlands); Jason Alexander (University of Bath, Bath, United Kingdom)
Passwords have become a ubiquitous part of our everyday lives, needed for every web-service and system. However, it is challenging to create safe and diverse alphanumeric passwords, and to recall them, imposing a cognitive burden on the user. Through consecutive experiments, we explored the movement space, affordances and interaction, and memorability of a tangible, handheld, embodied password. In this context, we found that: (1) a movement space of 200 mm × 200 mm is preferred; (2) each context has a perceived level of safety, which—together with the affordances and link to familiarity—influences how the password is performed. Furthermore, the artefact’s dimensions should be balanced within the design itself, with the user, and the context, but there is a trade-off between the perceived safety and ergonomics; and (3) the designed embodied passwords can be recalled for at least a week, with participants creating unique passwords which were reproduced consistently.
8
Data-Driven Mark Orientation for Trend Estimation in Scatterplots
Tingting Liu (School of Computer Science, Qingdao, Shandong, China); Xiaotong Li (School of Computer Science, Qingdao, Shandong, China); Chen Bao (Shandong University, Qingdao, Shandong, China); Michael Correll (Tableau Software, Seattle, Washington, United States); Changhe Tu (Shandong University, Qingdao, China); Oliver Deussen (University of Konstanz, Konstanz, Germany); Yunhai Wang (Shandong University, Qingdao, China)
A common task for scatterplots is communicating trends in bivariate data. However, the ability of people to visually estimate these trends is under-explored, especially when the data violate assumptions required for common statistical models, or when visual trend estimates conflict with statistical ones. In such cases, designers may need to intervene and de-bias these estimations, or otherwise inform viewers about differences between statistical and visual trend estimations. We propose data-driven mark orientation as a solution in such cases, where the directionality of marks in the scatterplot guides viewers when visual estimation is otherwise unclear or ambiguous. Through a set of laboratory studies, we investigate trend estimation across a variety of data distributions and mark directionalities, and find that data-driven mark orientation can help resolve ambiguities in visual trend estimates.
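The paper's central encoding, orienting each mark along an estimated trend, can be sketched as follows. This is an illustrative approximation only: the neighbourhood size `k` and the per-point least-squares fit are assumptions, not the authors' method.

```python
import math

def mark_angles(points, k=5):
    """For each (x, y) point, fit a least-squares line through its k
    nearest x-neighbours and return a mark orientation in degrees.
    Illustrative sketch only, not the authors' implementation."""
    angles = []
    for (x0, _) in points:
        nearest = sorted(points, key=lambda p: abs(p[0] - x0))[:k]
        n = len(nearest)
        mx = sum(p[0] for p in nearest) / n
        my = sum(p[1] for p in nearest) / n
        sxx = sum((p[0] - mx) ** 2 for p in nearest)
        sxy = sum((p[0] - mx) * (p[1] - my) for p in nearest)
        slope = sxy / sxx if sxx else 0.0            # local least-squares slope
        angles.append(math.degrees(math.atan(slope)))  # slope -> mark rotation
    return angles
```

For perfectly linear data every mark ends up at the same angle; for noisier or non-linear data the marks rotate to follow the local trend.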
8
Vinci: An Intelligent Graphic Design System for Generating Advertising Posters
Shunan Guo (Tongji University, Shanghai, China); Zhuochen Jin (Tongji University, Shanghai, China); Fuling Sun (Tongji University, Shanghai, China); Jingwen Li (Intelligent Big Data Visualization Lab, Tongji University, Shanghai, China); Zhaorui Li (Tongji University, Shanghai, China); Yang Shi (Tongji College of Design and Innovation, Shanghai, China); Nan Cao (Tongji College of Design and Innovation, Shanghai, China)
Advertising posters are a commonly used form of information presentation for promoting a product. Producing them often takes designers much time and effort, given the abundant choices of design elements and layouts. This paper presents Vinci, an intelligent system that supports the automatic generation of advertising posters. Given a user-specified product image and taglines, Vinci uses a deep generative model to match the product image with a set of design elements and layouts for generating an aesthetic poster. The system also integrates online editing feedback that supports users in editing the posters and updating the generated results with their design preferences. Through a series of user studies and a Turing test, we found that Vinci can generate posters as good as those of human designers and that the online editing feedback improves the efficiency of poster modification.
8
Towards an Understanding of Situated AR Visualization for Basketball Free-Throw Training
Tica Lin (Harvard University, Cambridge, Massachusetts, United States); Rishi Singh (Harvard University, Cambridge, Massachusetts, United States); Yalong Yang (Harvard University, Cambridge, Massachusetts, United States); Carolina Nobre (Harvard University, Cambridge, Massachusetts, United States); Johanna Beyer (Harvard University, Cambridge, Massachusetts, United States); Maurice Smith (Harvard University, Cambridge, Massachusetts, United States); Hanspeter Pfister (Harvard University, Cambridge, Massachusetts, United States)
We present an observational study to compare co-located and situated real-time visualizations in basketball free-throw training. Our goal is to understand the advantages and concerns of applying immersive visualization to real-world skill-based sports training and to provide insights for designing AR sports training systems. We design both a situated 3D visualization on a head-mounted display and a 2D visualization on a co-located display to provide immediate visual feedback on a player's shot performance. Using a within-subject study design with experienced basketball shooters, we characterize user goals, report on qualitative training experiences, and compare the quantitative training results. Our results show that real-time visual feedback helps athletes refine subsequent shots. Shooters in our study achieve greater angle consistency with our visual feedback. Furthermore, AR visualization promotes an increased focus on body form in athletes. Finally, we present suggestions for the design of future sports AR studies.
7
Data@Hand: Fostering Visual Exploration of Personal Data on Smartphones Leveraging Speech and Touch Interaction
Young-Ho Kim (University of Maryland, College Park, Maryland, United States); Bongshin Lee (Microsoft Research, Redmond, Washington, United States); Arjun Srinivasan (Tableau Research, Seattle, Washington, United States); Eun Kyoung Choe (University of Maryland, College Park, Maryland, United States)
Most mobile health apps employ data visualization to help people view their health and activity data, but these apps provide limited support for visual data exploration. Furthermore, despite its huge potential benefits, mobile visualization research in the personal data context is sparse. This work aims to empower people to easily navigate and compare their personal health data on smartphones by enabling flexible time manipulation with speech. We designed and developed Data@Hand, a mobile app that leverages the synergy of two complementary modalities: speech and touch. Through an exploratory study with 13 long-term Fitbit users, we examined how multimodal interaction helps participants explore their own health data. Participants successfully adopted multimodal interaction (i.e., speech and touch) for convenient and fluid data exploration. Based on the quantitative and qualitative findings, we discuss design implications and opportunities with multimodal interaction for better supporting visual data exploration on mobile devices.
7
Improving Viewing Experiences of First-Person Shooter Gameplays with Automatically-Generated Motion Effects
Gyeore Yun (POSTECH, Pohang, Korea, Republic of); Hyoseung Lee (POSTECH, Pohang, Gyeongsangbuk-do, Korea, Republic of); Sangyoon Han (Pohang University of Science and Technology (POSTECH), Pohang, Korea, Republic of); Seungmoon Choi (Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, Korea, Republic of)
Millions of people now enjoy watching video gameplay, whether at an eSports stadium or at home. We seek a method that improves gameplay viewing experiences by presenting multisensory stimuli. Using a motion chair, we provide viewers watching first-person shooter (FPS) gameplay with motion effects automatically generated from the audiovisual stream. The motion effects express the game character's movement and gunfire action. We describe algorithms for computing such motion effects, developed using computer vision techniques and deep learning. Through a user study, we demonstrate that our method of providing motion effects significantly improves the viewing experience of FPS gameplay. The contributions of this paper are the motion synthesis algorithms integrated for FPS games and the empirical evidence for the benefits of multisensory gameplay viewing.
7
STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics
Sebastian Hubenschmid (University of Konstanz, Konstanz, Germany); Johannes Zagermann (University of Konstanz, Konstanz, Germany); Simon Butscher (University of Konstanz, Konstanz, Germany); Harald Reiterer (University of Konstanz, Konstanz, Germany)
Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to interact with visualizations in augmented reality using the mid-air gestures these devices support by default (e.g., due to limited precision). Touch-based interaction (e.g., via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work, we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality, and research implications indicating areas that need further investigation.
7
Stereo-Smell via Electrical Trigeminal Stimulation
Jas Brooks (University of Chicago, Chicago, Illinois, United States); Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States); Jingxuan Wen (University of Chicago, Chicago, Illinois, United States); Romain Nith (University of Chicago, Chicago, Illinois, United States); Jun Nishida (University of Chicago, Chicago, Illinois, United States); Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a novel type of olfactory device that creates a stereo-smell experience, i.e., directional information about the location of an odor, by rendering the readings of external odor sensors as trigeminal sensations using electrical stimulation of the user’s nasal septum. The key is that the sensations from the trigeminal nerve, which arise from nerve-endings in the nose, are perceptually fused with those of the olfactory bulb (the brain region that senses smells). As such, we propose that electrically stimulating the trigeminal nerve is an ideal candidate for stereo-smell augmentation/substitution that, unlike other approaches, does not require implanted electrodes in the olfactory bulb. To realize this, we engineered a self-contained device that users wear across their nasal septum. Our device outputs by stimulating the user’s trigeminal nerve using electrical impulses with variable pulse-widths; and it inputs by sensing the user’s inhalations using a photoreflector. It measures 10x23 mm and communicates with external gas sensors using Bluetooth. In our user study, we found the key electrical waveform parameters that enable users to feel an odor’s intensity (absolute electric charge) and direction (phase order and net charge). In our second study, we demonstrated that participants were able to localize a virtual smell source in the room by using our prototype without any previous training. Using these insights, our device enables expressive trigeminal sensations and could function as an assistive device for people with anosmia, who are unable to smell.
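The charge quantities the study manipulates (intensity via absolute charge, direction via net charge and phase order) can be illustrated with simple bookkeeping for a constant-current biphasic pulse. The function name and units here are illustrative assumptions, not the authors' implementation:

```python
def pulse_charges(amplitude, pos_width, neg_width):
    """Charge delivered by an asymmetric biphasic pulse at a constant
    current amplitude. Per the abstract, perceived odor intensity tracks
    the absolute electric charge, while perceived direction tracks the
    phase order and the net charge."""
    q_pos = amplitude * pos_width        # charge in the positive phase
    q_neg = amplitude * neg_width        # charge in the negative phase
    return q_pos + q_neg, q_pos - q_neg  # (absolute charge, net charge)
```

A symmetric pulse (equal phase widths) still delivers absolute charge, and hence intensity, but nets to zero; widening one phase biases the net charge, which the study links to perceived direction.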
7
Teardrop Glasses: Pseudo Tears Induce Sadness in You and Those Around You
Shigeo Yoshida (The University of Tokyo, Tokyo, Japan); Takuji Narumi (The University of Tokyo, Tokyo, Japan); Tomohiro Tanikawa (The University of Tokyo, Tokyo, Japan); Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan); Michitaka Hirose (The University of Tokyo, Tokyo, Japan)
Emotional contagion is a phenomenon in which one's emotions are transmitted among individuals unconsciously by observing others' emotional expressions. In this paper, we propose a method for mediating people's emotions by triggering emotional contagion through artificial bodily changes such as pseudo tears. We focused on shedding tears because of the link to several emotions besides sadness. In addition, it is expected that shedding tears would induce emotional contagion because it is observable by others. We designed an eyeglasses-style wearable device, Teardrop glasses, that release water drops near the wearer's eyes. The drops flow down the cheeks and emulate real tears. The study revealed that artificial crying with pseudo tears increased sadness among both wearers and those observing them. Moreover, artificial crying attenuated happiness and positive feelings in observers. Our findings show that actual bodily changes are not necessary for inducing emotional contagion as artificial bodily changes are also sufficient.
6
Quantitative Data Visualisation on Virtual Globes
Kadek Ananta Satriadi (Monash University, Melbourne, Australia); Barrett Ens (Monash University, Melbourne, Australia); Tobias Czauderna (Monash University, Victoria, Australia); Maxime Cordeil (Monash University, Melbourne, Australia); Bernhard Jenny (Monash University, Melbourne, Australia)
Geographic data visualisation on virtual globes is intuitive and widespread, but has not been thoroughly investigated. We explore two main design factors for quantitative data visualisation on virtual globes: (i) commonly used primitives (2D bar, 3D bar, circle) and (ii) the orientation of these primitives (tangential, normal, billboarded). We evaluate five distinctive visualisation idioms in a user study with 50 participants. The results show that aligning primitives tangentially on the globe's surface decreases the accuracy of area-proportional circle visualisations, while the orientation does not have a significant effect on the accuracy of length-proportional bar visualisations. We also find that tangential primitives induce higher perceived mental load than other orientations. Guided by these results, we design a novel globe visualisation idiom, Geoburst, that combines a virtual globe and a radial bar chart. A preliminary evaluation reports potential benefits and drawbacks of the Geoburst visualisation.
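The two primitive families compared above encode value differently: bars are length-proportional (linear in the value), while circles are area-proportional, so the radius must grow with the square root of the value. A minimal sketch of both scalings, with hypothetical names and a simple max-normalisation assumed:

```python
import math

def bar_length(value, l_max, v_max):
    """Length-proportional bar: length grows linearly with the value."""
    return l_max * value / v_max

def circle_radius(value, r_max, v_max):
    """Area-proportional circle: area grows linearly with the value,
    so the radius grows with sqrt(value)."""
    return r_max * math.sqrt(value / v_max)
```

For a value at one quarter of the maximum, the bar shrinks to a quarter of its length but the circle's radius only halves, one reason area encodings are generally harder to read accurately than length encodings.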
6
Investigating the Homogenization of Web Design: A Mixed-Methods Approach
Samuel Goree (Indiana University, Bloomington, Indiana, United States); Bardia Doosti (Indiana University Bloomington, Bloomington, Indiana, United States); David Crandall (Indiana University, Bloomington, Indiana, United States); Norman Makoto Su (Indiana University, Bloomington, Indiana, United States)
Visual design provides the backdrop to most of our interactions over the Internet, but has not received as much analytical attention as textual content. Combining computational and qualitative approaches, we investigate the growing concern that the visual design of the World Wide Web has homogenized over the past decade. By applying computer vision techniques to a large dataset of representative website images from 2003 to 2019, we show that designs have become significantly more similar since 2007, especially for page layouts, where the average distance between sites decreased by over 30%. Synthesizing interviews with 11 experienced web design professionals and our computational analyses, we discuss causes of this homogenization, including overlap in source code and libraries, color scheme standardization, and support for mobile devices. Our results seek to motivate future discussion of the factors that influence designers and their implications for the future trajectory of web design.
6
MetaMap: Supporting Visual Metaphor Ideation through Multi-dimensional Example-based Exploration
Youwen Kang (Hong Kong University of Science and Technology, Hong Kong, China); Zhida Sun (Hong Kong University of Science and Technology, Hong Kong, China); Sitong Wang (Columbia University, New York, New York, United States); Zeyu Huang (Hong Kong University of Science and Technology, Hong Kong, China); Ziming Wu (Hong Kong University of Science and Technology, Hong Kong, China); Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, China)
Visual metaphors, which are widely used in graphic design, can deliver messages in creative ways by fusing different objects. The keys to creating visual metaphors are diverse exploration and creative combination, both of which are challenging with conventional methods like image search. To streamline this ideation process, we propose a mind-map-like structure that recommends materials and assists users in exploring them. We present MetaMap, a supporting tool that inspires visual metaphor ideation through multi-dimensional example-based exploration. To facilitate the divergence and convergence of the ideation process, MetaMap provides 1) sample images based on keyword association and color filtering; 2) example-based exploration in semantics, color, and shape dimensions; and 3) thinking-path tracking and idea recording. We conducted a within-subject study with 24 design enthusiasts, taking a Pinterest-like interface as the baseline. Our evaluation results suggest that MetaMap provides an engaging ideation process and helps participants create diverse and creative ideas.
6
IdeaBot: Investigating Social Facilitation in Human-Machine Team Creativity
Angel Hsing-Chi Hwang (Cornell University, Ithaca, New York, United States); Andrea Stevenson Won (Cornell University, Ithaca, New York, United States)
The present study investigates how human subjects collaborate with a computer-mediated chatbot in creative idea-generation tasks. In three text-based between-group studies, we tested whether the perceived identity (i.e., whether the bot is perceived as a machine or as a human) or the conversational style of a teammate would moderate the outcomes of participants' creative production. In Study 1, participants worked with either a chatbot or a human confederate. In Study 2, all participants worked with a human teammate but were informed that their partner was either a human or a chatbot. Conversely, in Study 3 all participants worked with a chatbot, but were told that their partner was either a chatbot or a human. We investigated differences in idea-generation outcomes and found that participants consistently contributed more ideas, and ideas of higher quality, when they perceived their teamworking partner to be a bot. Furthermore, when the conversational style of the partner was robotic, participants with high anxiety in group communication reported greater creative self-efficacy in task performance. Finally, whether the perceived dominance of a partner and the pressure to come up with ideas during the task mediated positive idea-generation outcomes also depended on whether the conversational style of the bot partner was robot- or human-like. Based on our findings, we discuss implications for the future design of artificial agents as active team players in collaboration tasks.
6
Unmaking: Enabling and Celebrating the Creative Material of Failure, Destruction, Decay, and Deformation
Katherine W. Song (UC Berkeley, Berkeley, California, United States); Eric Paulos (UC Berkeley, Berkeley, California, United States)
The accessibility and growing ubiquity of digital fabrication have ushered in a celebration of creativity and "making." However, the focus is often on the resulting static artifact, or on the creative process and tools to design it. We envision a post-making process that extends past these final static objects, not just in their making but in their "unmaking." By drawing from artistic movements such as Auto-Destructive Art, intentionally inverting well-established engineering principles of structurally sound design, and safely misusing unstable materials, we demonstrate an important extension to making: unmaking. In this paper, we provide designers with a new vocabulary of unmaking operations within standard 3D modeling tools. We demonstrate how such designs can be realized using a novel multi-material 3D printing process. Finally, we detail how unmaking allows designs to change over time, how it is an ally to sustainability and reusability, and how it captures themes of "aura," emotionality, and personalization.
5
When the Social Becomes Non-Human: A Study of Young People’s Perception of Social Support in Chatbots
Petter Bae Brandtzæg (University of Oslo, Oslo, Norway); Marita Skjuve (SINTEF Digital, Oslo, Norway); Kim Kristoffer Dysthe (University of Oslo, Oslo, Norway); Asbjørn Følstad (SINTEF, Oslo, Norway)
Although social support is important for health and well-being, many young people are hesitant to reach out for support. The emerging uptake of chatbots for social and emotional purposes entails opportunities and concerns regarding non-human agents as sources of social support. To explore this, we invited 16 participants (16–21 years) to use and reflect on chatbots as sources of social support. Our participants first interacted with a chatbot for mental health (Woebot) for two weeks. Next, they participated in individual in-depth interviews. As part of the interview session, they were presented with a chatbot prototype providing information to young people. Two months later, the participants reported on their continued use of Woebot. Our findings provide in-depth knowledge about how young people may experience various types of social support—appraisal, informational, emotional, and instrumental support—from chatbots. We summarize implications for theory, practice, and future research.
5
Assessing Social Anxiety Through Digital Biomarkers Embedded in a Gaming Task
Martin Johannes Dechant (University of Saskatchewan, Saskatoon, Saskatchewan, Canada); Julian Frommel (University of Saskatchewan, Saskatoon, Saskatchewan, Canada); Regan L. Mandryk (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)
Digital biomarkers of mental health issues offer many advantages, including timely identification for early intervention, ongoing assessment during treatment, and reducing barriers to assessment stemming from geography, age, fear, or disparities in access to systems of care. Embedding digital biomarkers into games may further increase the reach of digital assessment. In this study, we explore game-based digital biomarkers for social anxiety, based on interaction with a non-player character (NPC). We show that social anxiety affects a player’s accuracy and their movement path in a gaming task involving an NPC. Further, we compared first versus third-person camera perspectives and the use of customized versus predefined avatars to explore the influence of common game interface factors on the expression of social anxiety through in-game movements. Our findings provide new insights about how game-based digital biomarkers can be effectively used for social anxiety, affording the benefits of early and ongoing digital assessment.
5
HairTouch: Providing Stiffness, Roughness and Surface Height Differences Using Reconfigurable Brush Hairs on a VR Controller
Chi-Jung Lee (National Taiwan University, Taipei, Taiwan); Hsin-Ruey Tsai (National Chengchi University, Taipei, Taiwan); Bing-Yu Chen (National Taiwan University, Taipei, Taiwan)
Tactile feedback is widely used to enhance realism in virtual reality (VR). When touching virtual objects, stiffness and roughness are common and salient properties perceived by users. Furthermore, when touching a surface with a complicated structure, differences not only in stiffness and roughness but also in surface height are crucial. To integrate these factors, we propose a pin-based handheld device, HairTouch, that provides stiffness differences, roughness differences, surface height differences, and their combinations. HairTouch consists of two pins, one for each of the two finger segments closest to the index fingertip. By controlling the brush hairs' length and bending direction to change their elasticity and tip direction, each pin renders varying stiffness and roughness. By further controlling the hairs' configuration and the pins' height independently, versatile stiffness, roughness, and surface height differences are achieved. We conducted a perception study to determine how well users can distinguish stiffness and roughness on each of the segments. Based on the results, we performed a VR experience study to verify that the tactile feedback from HairTouch enhances VR realism.
5
Datamations: Animated Explanations of Data Analysis Pipelines
Xiaoying Pu (University of Michigan, Ann Arbor, Michigan, United States); Sean Kross (The University of California San Diego, La Jolla, California, United States); Jake M. Hofman (Microsoft Research, New York, New York, United States); Daniel G. Goldstein (Microsoft Research, New York, New York, United States)
Plots and tables are commonplace in today's data-driven world, and much research has been done on how to make these figures easy to read and understand. Often, however, the information they contain conveys only the end result of a complex and subtle data analysis pipeline. This can leave the reader struggling to understand what steps were taken to arrive at a figure, and what implications this has for the underlying results. In this paper, we introduce datamations, which are animations designed to explain the steps that led to a given plot or table. We present the motivation and concept behind datamations, discuss how to programmatically generate them, and provide the results of two large-scale randomized experiments investigating how datamations affect people's abilities to understand potentially puzzling results compared to seeing only final plots and tables containing those results.
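The essence of a datamation, exposing every intermediate state of a pipeline rather than only its end result, can be sketched in plain Python for a grouped-mean analysis. The function name and frame format are assumptions for illustration, not the paper's API:

```python
from collections import defaultdict

def datamation_frames(rows, group_key, value_key):
    """Return the sequence of intermediate 'frames' behind a grouped-mean
    summary, in the spirit of a datamation: (1) the raw rows, (2) the rows
    grouped by key, (3) the per-group means."""
    frames = [("raw data", list(rows))]
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_key]].append(row[value_key])
    frames.append(("grouped by " + group_key, dict(groups)))
    means = {g: sum(vs) / len(vs) for g, vs in groups.items()}
    frames.append(("mean of " + value_key, means))
    return frames
```

An animation front end would then render one frame per tuple: raw points, points gathering into groups, and groups collapsing into their means.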
5
Soloist: Generating Mixed-Initiative Tutorials from Existing Guitar Instructional Videos Through Audio Processing
Bryan Wang (University of Toronto, Toronto, Ontario, Canada); Mengyu Yang (University of Toronto, Toronto, Ontario, Canada); Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
Learning musical instruments using online instructional videos has become increasingly prevalent. However, pre-recorded videos lack the instantaneous feedback and personal tailoring that human tutors provide. In addition, existing video navigation is not optimized for instrument learning, which encumbers the learning experience. Guided by our formative interviews with guitar players and prior literature, we designed Soloist, a mixed-initiative learning framework that automatically generates customizable curriculums from off-the-shelf guitar video lessons. Soloist takes raw videos as input and leverages deep-learning-based audio processing to extract musical information. This back-end processing is used to provide an interactive visualization that supports effective video navigation and real-time feedback on the user's performance, creating a guided learning experience. We demonstrate the capabilities and specific use cases of Soloist within the domain of learning electric guitar solos using instructional YouTube videos. A remote user study, conducted to gather feedback from guitar players, shows encouraging results, as the users unanimously preferred learning with Soloist over unconverted instructional videos.
5
TexYZ: Embroidering Enameled Wires for Three Degree-of-Freedom Mutual Capacitive Sensing
Roland Aigner (University of Applied Sciences Upper Austria, Hagenberg, Austria); Andreas Pointner (University of Applied Sciences Upper Austria, Hagenberg, Austria); Thomas Preindl (University of Applied Sciences Upper Austria, Hagenberg, Austria); Rainer Danner (University of Applied Sciences Upper Austria, Hagenberg, Austria); Michael Haller (University of Applied Sciences Upper Austria, Hagenberg, Austria)
In this paper, we present TexYZ, a method for rapid and effortless manufacturing of textile mutual capacitive sensors using a commodity embroidery machine. We use enameled wire as a bobbin thread to yield textile capacitors with high quality and consistency. As a consequence, we are able to leverage the precision and expressiveness of projected mutual capacitance for textile electronics, even when size is limited. Harnessing the assets of machine embroidery, we implement and analyze five distinct electrode patterns, examine the resulting electrical features with respect to geometrical attributes, and demonstrate the feasibility of two promising candidates for small-scale matrix layouts. The resulting sensor patches are further evaluated in terms of capacitance homogeneity, signal-to-noise ratio, sensing range, and washability. Finally, we demonstrate two use case scenarios, primarily focusing on continuous input with up to three degrees-of-freedom.
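Among the metrics the sensor patches are evaluated on, signal-to-noise ratio is conventionally stated in decibels. A minimal helper for amplitude measurements, included purely to illustrate the metric (the factor 20 applies to amplitude ratios; power ratios would use 10):

```python
import math

def snr_db(signal_amplitude, noise_amplitude):
    """Signal-to-noise ratio in decibels for amplitude measurements:
    20 * log10 of the amplitude ratio."""
    return 20.0 * math.log10(signal_amplitude / noise_amplitude)
```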
5
GestureMap: Supporting Visual Analytics and Quantitative Analysis of Motion Elicitation Data by Learning 2D Embeddings
Hai Duong Dang (University of Bayreuth, Bayreuth, Germany); Daniel Buschek (University of Bayreuth, Bayreuth, Germany)
This paper presents GestureMap, a visual analytics tool for gesture elicitation which directly visualises the space of gestures. Concretely, a Variational Autoencoder embeds gestures recorded as 3D skeletons on an interactive 2D map. GestureMap further integrates three computational capabilities to connect exploration to quantitative measures: Leveraging DTW Barycenter Averaging (DBA), we compute average gestures to 1) represent gesture groups at a glance; 2) compute a new consensus measure (variance around average gesture); and 3) cluster gestures with k-means. We evaluate GestureMap and its concepts with eight experts and an in-depth analysis of published data. Our findings show how GestureMap facilitates exploring large datasets and helps researchers to gain a visual understanding of elicited gesture spaces. It further opens new directions, such as comparing elicitations across studies. We discuss implications for elicitation studies and research, and opportunities to extend our approach to additional tasks in gesture elicitation.
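The consensus measure described above, variance around an average gesture, can be sketched on the 2D map coordinates. Note this is a simplification: the paper computes averages over full gesture sequences with DTW Barycenter Averaging, whereas this sketch substitutes a plain mean of embedded points:

```python
def consensus(embedded):
    """Variance of embedded gestures around their average position: the
    lower the value, the tighter the agreement ('consensus') in the
    elicited gesture set. Sketch on 2D embedding coordinates only."""
    n = len(embedded)
    cx = sum(p[0] for p in embedded) / n          # centre of the 2D points,
    cy = sum(p[1] for p in embedded) / n          # standing in for the average gesture
    return sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in embedded) / n
```

Lower values indicate tighter agreement among the gestures elicited for a referent.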
5
Collecting and Characterizing Natural Language Utterances for Specifying Data Visualizations
Arjun Srinivasan (Tableau Research, Seattle, Washington, United States)Nikhila Nyapathy (Georgia Institute of Technology, Atlanta, Georgia, United States)Bongshin Lee (Microsoft Research, Redmond, Washington, United States)Steven M. Drucker (Microsoft Research, Redmond, Washington, United States)John Stasko (Georgia Institute of Technology, Atlanta, Georgia, United States)
Natural language interfaces (NLIs) for data visualization are becoming increasingly popular both in academic research and in commercial software. Yet, there is a lack of empirical understanding of how people specify visualizations through natural language. We conducted an online study (N = 102), showing participants a series of visualizations and asking them to provide utterances they would pose to generate the displayed charts. From the responses, we curated a dataset of 893 utterances and characterized the utterances according to (1) their phrasing (e.g., commands, queries, questions) and (2) the information they contained (e.g., chart types, data aggregations). To help guide future research and development, we contribute this utterance dataset and discuss its applications toward the creation and benchmarking of NLIs for visualization.
5
Visuo-haptic Illusions for Linear Translation and Stretching using Physical Proxies in Virtual Reality
Martin Feick (Saarland Informatics Campus, Saarbrücken, Germany)Niko Kleer (Saarland Informatics Campus, Saarbrücken, Germany)André Zenner (Saarland University, Saarland Informatics Campus, Saarbrücken, Germany)Anthony Tang (University of Toronto, Toronto, Ontario, Canada)Antonio Krüger (DFKI, Saarland Informatics Campus, Saarbrücken, Germany)
Providing haptic feedback when manipulating virtual objects is an essential part of immersive virtual reality experiences; however, it is challenging to replicate all of an object’s properties and characteristics. We propose the use of visuo-haptic illusions alongside physical proxies to enhance the scope of proxy-based interactions with virtual objects. In this work, we focus on two manipulation techniques, linear translation and stretching across different distances, and investigate how much discrepancy between the physical proxy and the virtual object may be introduced without participants noticing. In a study with 24 participants, we found that manipulation technique and travel distance significantly affect the detection thresholds, and that visuo-haptic illusions impact performance and accuracy. We show that this technique can be used to enable functional proxy objects that act as stand-ins for multiple virtual objects, illustrating the technique through a showcase VR-DJ application.
5
Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion and Inverse Kinematics
Karan Ahuja (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Sven Mayer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Mayank Goel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
We present Pose-on-the-Go, a full-body pose estimation system that uses sensors already found in today’s smartphones. This stands in contrast to prior systems, which require worn or external sensors. We achieve this result via extensive sensor fusion, leveraging a phone’s front and rear cameras, the user-facing depth camera, touchscreen, and IMU. Even still, we are missing data about a user’s body (e.g., angle of the elbow joint), and so we use inverse kinematics to estimate and animate probable body poses. We provide a detailed evaluation of our system, benchmarking it against a professional-grade Vicon tracking system. We conclude with a series of demonstration applications that underscore the unique potential of our approach, which could be enabled on many modern smartphones with a simple software update.
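The abstract notes that unobserved joints such as the elbow are filled in with inverse kinematics. A minimal sketch of the standard two-bone, law-of-cosines step (illustrative only; the paper's actual IK solver is not described here, and the function name is hypothetical):

```python
import math

def elbow_angle(shoulder, wrist, upper_len, fore_len):
    """Estimate the unobserved elbow angle (degrees) from shoulder and wrist
    positions via the law of cosines -- the basic two-bone IK step used to
    fill in a missing joint between two known endpoints."""
    d = min(math.dist(shoulder, wrist), upper_len + fore_len)  # clamp unreachable targets
    cos_e = (upper_len**2 + fore_len**2 - d**2) / (2 * upper_len * fore_len)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_e))))
```

A fully extended arm yields roughly 180 degrees; bringing the wrist closer to the shoulder bends the estimated elbow accordingly.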
5
RubySlippers: Supporting Content-based Voice Navigation for How-to Videos
Minsuk Chang (KAIST, Daejeon, Korea, Republic of)Mina Huh (KAIST, Daejeon, Korea, Republic of)Juho Kim (KAIST, Daejeon, Korea, Republic of)
Directly manipulating the timeline, such as scrubbing for thumbnails, is the standard way of controlling how-to videos. However, when how-to videos involve physical activities, people inconveniently alternate between controlling the video and performing the tasks. Adopting a voice user interface allows people to control the video with voice while performing the tasks with their hands. However, naively translating timeline manipulation into voice user interfaces (VUIs) results in temporal referencing (e.g. "rewind 20 seconds"), which requires a different mental model for navigation and thereby limits users' ability to peek into the content. We present RubySlippers, a system that supports efficient content-based voice navigation through keyword-based queries. Our computational pipeline automatically detects referenceable elements in the video, and finds the video segmentation that minimizes the number of needed navigational commands. Our evaluation (N=12) shows that participants could perform three representative navigation tasks with fewer commands and less frustration using RubySlippers than with the conventional voice-enabled video interface.
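Segment-finding of this kind can be illustrated with a classic interval dynamic program. The cost function below is a hypothetical stand-in for whatever per-segment command count RubySlippers actually optimizes; this is a sketch of the general technique, not the paper's pipeline:

```python
def best_segmentation(n, cost):
    """Minimum-cost partition of positions 0..n into contiguous segments via
    dynamic programming. `cost(i, j)` scores segment [i, j); here it stands in
    for a (hypothetical) number of navigational commands a segment requires."""
    best = [0.0] + [float("inf")] * n
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j)
            if c < best[j]:
                best[j], cut[j] = c, i
    # Recover the chosen segment boundaries.
    bounds, j = [], n
    while j > 0:
        bounds.append((cut[j], j))
        j = cut[j]
    return best[n], bounds[::-1]
```

With a cost that penalizes long segments quadratically, the program splits finely; with a flat per-segment cost, it keeps the video whole.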
5
Sketchnote Components, Design Space Dimensions, and Strategies for Effective Visual Note Taking
Rebecca Zheng (University College London, London, United Kingdom)Marina Fernández Camporro (University College London, London, United Kingdom)Hugo Romat (ETH, Zurich, Switzerland)Nathalie Henry Riche (Microsoft Research, Redmond, Washington, United States)Benjamin Bach (Edinburgh University, Edinburgh, United Kingdom)Fanny Chevalier (University of Toronto, Toronto, Ontario, Canada)Ken Hinckley (Microsoft Research, Redmond, Washington, United States)Nicolai Marquardt (University College London, London, United Kingdom)
Sketchnoting is a form of visual note taking where people listen to, synthesize, and visualize ideas from a talk or other event using a combination of pictures, diagrams, and text. Little is known about the design space of this kind of visual note taking. With an eye towards informing the implementation of digital equivalents of sketchnoting, inking, and note taking, we introduce a classification of sketchnote styles and techniques, grounded in a qualitative analysis of 103 sketchnotes and situated in context through six semi-structured follow-up interviews. Our findings distill core sketchnote components (content, layout, structuring elements, and visual styling) and dimensions of the sketchnote design space, classifying levels of conciseness, illustration, structure, personification, cohesion, and craftsmanship. We unpack strategies to address particular note-taking challenges, for example dealing with the constraints of live drawing, and discuss relevance for future digital inking tools, such as recomposition, styling, and design suggestions.
5
MIRIA: A Mixed Reality Toolkit for the In-Situ Visualization and Analysis of Spatio-Temporal Interaction Data
Wolfgang Büschel (Technische Universität Dresden, Dresden, Germany)Anke Lehmann (Technische Universität Dresden, Dresden, Germany)Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
In this paper, we present MIRIA, a Mixed Reality Interaction Analysis toolkit designed to support the in-situ visual analysis of user interaction in mixed reality and multi-display environments. So far, there are few options to effectively explore and analyze interaction patterns in such novel computing systems. With MIRIA, we address this gap by supporting the analysis of user movement, spatial interaction, and event data by multiple, co-located users directly in the original environment. Based on our own experiences and an analysis of the typical data, tasks, and visualizations used in existing approaches, we identify requirements for our system. We report on the design and prototypical implementation of MIRIA, which is informed by these requirements and offers various visualizations such as 3D movement trajectories, position heatmaps, and scatterplots. To demonstrate the value of MIRIA for real-world analysis tasks, we conducted expert feedback sessions using several use cases with authentic study data.
5
How WEIRD is CHI?
Sebastian Linxen (University of Basel, Basel, Switzerland)Christian Sturm (Hamm-Lippstadt University of Applied Sciences, Lippstadt, North Rhine-Westphalia, Germany)Florian Brühlmann (University of Basel, Basel, Switzerland)Vincent Cassau (Hochschule Hamm-Lippstadt, Lippstadt, North Rhine-Westphalia, Germany)Klaus Opwis (University of Basel, Basel, Switzerland)Katharina Reinecke (University of Washington, Seattle, Washington, United States)
Computer technology is often designed in technology hubs in Western countries, invariably making it "WEIRD", because it is based on the intuition, knowledge, and values of people who are Western, Educated, Industrialized, Rich, and Democratic. Developing technology that is universally useful and engaging requires knowledge about members of WEIRD and non-WEIRD societies alike. In other words, it requires us, the CHI community, to generate this knowledge by studying representative participant samples. To find out to what extent CHI participant samples are from Western societies, we analyzed papers published in the CHI proceedings between 2016-2020. Our findings show that 73% of CHI study findings are based on Western participant samples, representing less than 12% of the world's population. Furthermore, we show that most participant samples at CHI tend to come from industrialized, rich, and democratic countries with generally highly educated populations. Encouragingly, recent years have seen a slight increase in non-Western samples and those that include several countries. We discuss suggestions for further broadening the international representation of CHI participant samples.
5
GuideCopter - A Precise Drone-Based Haptic Guidance Interface for Blind or Visually Impaired People
Felix Huppert (University of Passau, Passau, Bavaria, Germany)Gerold Hoelzl (University of Passau, Passau, Bavaria, Germany)Matthias Kranz (University of Passau, Passau, Bavaria, Germany)
Drone-assisted navigation aids supporting the walking activities of visually impaired people have been established in related work, but fine-point object grasping and object localization in unknown environments still present an open and complex challenge. We present a drone-based interface that provides fine-grained haptic feedback and thus physically guides users in hand-object localization tasks in unknown surroundings. Our research is built around community groups of blind or visually impaired (BVI) people, who provided in-depth insights during the development process and later served as study participants. A pilot study infers users' sensitivity to the applied guiding stimulus forces and compares different human-drone tether interfacing possibilities. In a comparative follow-up study, we show that our drone-based approach achieves greater accuracy than a current audio-based hand-guiding system and delivers an overall more intuitive and relatable fine-point guiding experience.
5
UMLAUT: Debugging Deep Learning Programs using Program Structure and Model Behavior
Eldon Schoop (University of California, Berkeley, Berkeley, California, United States)Forrest Huang (University of California, Berkeley, Berkeley, California, United States)Bjoern Hartmann (UC Berkeley, Berkeley, California, United States)
Training deep neural networks can generate non-descriptive error messages or produce unusual output without any explicit errors at all. While experts rely on tacit knowledge to apply debugging strategies, non-experts lack the experience required to interpret model output and correct Deep Learning (DL) programs. In this work, we identify DL debugging heuristics and strategies used by experts, and use them to guide the design of Umlaut. Umlaut checks DL program structure and model behavior against these heuristics; provides human-readable error messages to users; and annotates erroneous model output to facilitate error correction. Umlaut links code, model output, and tutorial-driven error messages in a single interface. We evaluated Umlaut in a study with 15 participants to determine its effectiveness in helping developers find and fix errors in their DL programs. Participants using Umlaut found and fixed significantly more bugs compared to a baseline condition.
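A toy example of the kind of heuristic check described above, applied to program inputs before training. The specific check and message texts are illustrative assumptions, not Umlaut's actual rules or output:

```python
import numpy as np

def check_input_scale(batch):
    """Hypothetical pre-training heuristic in the spirit of checking DL
    program structure: flag image batches that look un-normalized or
    degenerate, returning human-readable messages."""
    msgs = []
    if batch.max() > 1.5:
        msgs.append("Input values exceed 1.0 - did you forget to divide by 255?")
    if batch.std() < 1e-6:
        msgs.append("Input batch is (nearly) constant - check the data loader.")
    return msgs
```

Chaining many such small checks over code structure and model output, each tied to a tutorial-style explanation, is the interaction pattern the paper evaluates.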
5
Hidden Interaction Techniques: Concealed Information Acquisition and Texting on Smartphones and Wearables
Ville Mäkelä (LMU Munich, Munich, Germany)Johannes Kleine (LMU Munich, Munich, Germany)Maxine Hood (Wellesley College, Wellesley, Massachusetts, United States)Florian Alt (Bundeswehr University Munich, Munich, Germany)Albrecht Schmidt (LMU Munich, Munich, Germany)
There are many situations where using personal devices is not socially acceptable, or where nearby people present a privacy risk. For these situations, we explore the concept of hidden interaction techniques through two prototype applications. HiddenHaptics allows users to receive information through vibrotactile cues on a smartphone, and HideWrite allows users to write text messages by drawing on a dimmed smartwatch screen. We conducted three user studies to investigate whether, and how, these techniques can be used without being exposed. Our primary findings are (1) users can effectively hide their interactions while attending to a social situation, (2) users seek to interact when another person is speaking, and they also tend to hide the interaction using their body or furniture, and (3) users can sufficiently focus on the social situation despite their interaction, whereas non-users feel that observing the user hinders their ability to focus on the social activity.
5
Haptic and Visual Comprehension of a 2D Graph Layout Through Physicalisation
Adam Drogemuller (University of South Australia, Mawson Lakes, South Australia, Australia)Andrew Cunningham (University of South Australia, Adelaide, Australia)James A. Walsh (University of South Australia, Mawson Lakes, South Australia, Australia)James Baumeister (University of South Australia, Adelaide, South Australia, Australia)Ross T. Smith (University of South Australia, Adelaide, Australia)Bruce H. Thomas (University of South Australia, Mawson Lakes, South Australia, Australia)
Data physicalisations afford people the ability to directly interact with data using their hands, potentially achieving a more comprehensive understanding of a dataset. Due to their complex nature, the representation of graphs and networks could benefit from physicalisation, bringing the dataset from the digital world into the physical one. However, no empirical work has investigated the effects of physicalisation on the comprehension of graph representations. In this work, we present initial design considerations for graph physicalisations, as well as an empirical study investigating differences in comprehension between virtual and physical representations. We found that participants perceived themselves as being more accurate via touch and sight (visual-haptic) than in the graphical-only modality, and perceived a triangle count task as less difficult in visual-haptic than in the graphical-only modality. Additionally, we found that participants significantly preferred interacting with visual-haptic over other conditions, despite no significant effect on task time or error.
5
Scene-Aware Behavior Synthesis for Virtual Pets in Mixed Reality
Wei Liang (Beijing Institute of Technology, Beijing, China)Xinzhe Yu (Beijing Institute of Technology, Beijing, China)Rawan Alghofaili (George Mason University, Fairfax, Virginia, United States)Yining Lang (Alibaba Group, Beijing, China)Lap-Fai Yu (George Mason University, Fairfax, Virginia, United States)
Virtual pets are an alternative to real pets, providing a substitute for people with allergies or preparing people for adopting a real pet. Recent advancements in mixed reality pave the way for virtual pets to provide a more natural and seamless experience for users. However, one key challenge is embedding environmental awareness into the virtual pet (e.g., identifying the food bowl's location) so that they can behave naturally in the real world. We propose a novel approach to synthesize virtual pet behaviors by considering scene semantics, enabling a virtual pet to behave naturally in mixed reality. Given a scene captured from the real world, our approach synthesizes a sequence of pet behaviors (e.g., resting after eating). Then, we assign each behavior in the sequence to a location in the real scene. We conducted user studies to evaluate our approach, which showed the efficacy of our approach in synthesizing natural virtual pet behaviors.
5
Touch&Fold: A Foldable Haptic Actuator for Rendering Touch in Mixed Reality
Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States)Pengyu Li (University of Chicago, Chicago, Illinois, United States)Romain Nith (University of Chicago, Chicago, Illinois, United States)Joshua Fonseca (University of Chicago, Chicago, Illinois, United States)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a nail-mounted foldable haptic device that provides tactile feedback to mixed reality (MR) environments by pressing against the user’s fingerpad when a user touches a virtual object. What is novel in our device is that it quickly tucks away when the user interacts with real-world objects. Its design allows it to fold back on top of the user’s nail when not in use, keeping the user’s fingerpad free to, for instance, manipulate handheld tools and other objects while in MR. To achieve this, we engineered a wireless and self-contained haptic device, which measures 24×24×41 mm and weighs 9.5 g. Furthermore, our foldable end-effector also features a linear resonant actuator, allowing it to render not only touch contacts (i.e., pressure) but also textures (i.e., vibrations). We demonstrate how our device renders contacts with MR surfaces, buttons, low- and high-frequency textures. In our first user study, we found that participants perceived our device to be more realistic than a previous haptic device that also leaves the fingerpad free (i.e., fingernail vibration). In our second user study, we investigated the participants’ experience while using our device in a real-world task that involved physical objects. We found that our device allowed participants to use the same finger to manipulate handheld tools, small objects, and even feel textures and liquids, without much hindrance to their dexterity, while feeling haptic feedback when touching MR interfaces.
5
Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics
Martin Schmitz (Technical University of Darmstadt, Darmstadt, Germany)Jan Riemann (Technical University of Darmstadt, Darmstadt, Germany)Florian Müller (TU Darmstadt, Darmstadt, Germany)Steffen Kreis (TU Darmstadt, Darmstadt, Germany)Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability by a series of example applications.
5
Prepare for Trouble and Make It Double: The Power Motive Predicts Pokémon Choices Based on Apparent Strength
Susanne Poeller (University of Trier, Trier, Germany)Karla Waldenmeier (University of Trier, Trier, Germany)Nicola Baumann (University of Trier, Trier, Germany)Regan L. Mandryk (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)
Two social motives are distinguished by Motive Disposition Theory: affiliation and power. Motives orient, select and energize our behaviour, suggesting that the choices of power-motivated individuals should be guided by power cues, such as the appearance of strength in a game character or avatar. In study 1 we demonstrate that participants were more likely to pick strong-looking Pokémon for a fight and cute Pokémon as a companion. In addition, we show that even when considering these contexts, the power motive predicts preferences for a powerful appearance, whereas affiliation does not. In study 2 we replicate the study 1 findings and distinguish between two ways to enact the power motive (prosocial and dominant power). We demonstrate that the dominance, but not the prosociality, facet drives the preference for strong-looking Pokémon. Our findings suggest that the need to influence others—the power motive—drives the choice for battle companions who symbolize strength.
5
VisiFit: Structuring Iterative Improvement for Novice Designers
Lydia B. Chilton (Columbia University, New York, New York, United States)Ecenaz Jen Ozmen (Columbia University, New York, New York, United States)Sam H. Ross (Barnard College, New York, New York, United States)Vivian Liu (Columbia University, New York, New York, United States)
Visual blends are an advanced graphic design technique to seamlessly integrate two objects into one. Existing tools help novices create prototypes of blends, but it is unclear how they would improve them to be higher fidelity. To help novices, we aim to add structure to the iterative improvement process. We introduce a method for improving prototypes that uses secondary design dimensions to explore a structured design space. This method is grounded in the cognitive principles of human visual object recognition. We present VisiFit – a computational design system that uses this method to enable novice graphic designers to improve blends with computationally generated options they can select, adjust, and chain together. Our evaluation shows novices can substantially improve 76% of blends in under 4 minutes. We discuss how the method can be generalized to other blending problems, and how computational tools can support novices by enabling them to explore a structured design space quickly and efficiently.
5
More Kawaii than a Real-Person Streamer: Understanding How the Otaku Community Engages with and Perceives Virtual YouTubers
Zhicong Lu (City University of Hong Kong, Hong Kong, China)Chenxinran Shen (University of Toronto, Toronto, Ontario, Canada)Jiannan Li (University of Toronto, Toronto, Ontario, Canada)Hong Shen (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Daniel Wigdor (University of Toronto, Toronto, Ontario, Canada)
Live streaming has become increasingly popular, with most streamers presenting their real-life appearance. However, Virtual YouTubers (VTubers), virtual 2D or 3D avatars that are voiced by humans, are emerging as live streamers and attracting a growing viewership in East Asia. Although prior research has found that many viewers seek real-life interpersonal interactions with real-person streamers, it is currently unknown what makes VTuber live streams engaging or how they are perceived differently than real-person streamers. We conducted an interview study to understand how viewers engage with VTubers and perceive the identities of the voice actors behind the avatars (i.e., Nakanohito). The data revealed that virtual avatars bring unique performative opportunities which result in different viewer expectations and interpretations of VTuber behavior. Viewers intentionally upheld the disembodiment of VTuber avatars from their voice actors. We uncover the nuances in viewer perceptions and attitudes and further discuss the implications of VTuber practices to the understanding of live streaming in general.
5
Can You Hear My Heartbeat?: Hearing an Expressive Biosignal Elicits Empathy
R. Michael Winters (Georgia Institute of Technology, Atlanta, Georgia, United States)Bruce N. Walker (Georgia Institute of Technology, Atlanta, Georgia, United States)Grace Leslie (Georgia Tech, Atlanta, Georgia, United States)
Interfaces designed to elicit empathy provide an opportunity for HCI with important pro-social outcomes. Recent research has demonstrated that perceiving expressive biosignals can facilitate emotional understanding and connection with others, but this work has been largely limited to visual approaches. We propose that hearing these signals will also elicit empathy, and test this hypothesis with sounding heartbeats. In a lab-based within-subjects study, participants (N=27) completed an emotion recognition task in different heartbeat conditions. We found that hearing heartbeats changed participants’ emotional perspective and increased their reported ability to “feel what the other was feeling.” From these results, we argue that auditory heartbeats are well-suited as an empathic intervention, and might be particularly useful for certain groups and use-contexts because of their musical and non-visual nature. This work establishes a baseline for empathic auditory interfaces, and offers a method to evaluate the effects of future designs.
5
ArticuLev: An Integrated Self-Assembly Pipeline for Articulated Multi-Bead Levitation Primitives
Andreas Rene Fender (ETH, Zurich, Switzerland)Diego Martinez Plasencia (University College London, London, United Kingdom)Sriram Subramanian (University College London, London, United Kingdom)
Acoustic levitation is gaining popularity as an approach to create physicalized mid-air content by levitating different types of levitation primitives. Such primitives can be independent particles or particles that are physically connected via threads or pieces of cloth to form shapes in mid-air. However, initialization (i.e., placement of such primitives in their mid-air target locations) currently relies on either manual placement or specialized ad-hoc implementations, which limits their practical usage. We present ArticuLev, an integrated pipeline that deals with the identification, assembly and mid-air placement of levitated shape primitives. We designed ArticuLev with the physical properties of commonly used levitation primitives in mind. It enables experiences that seamlessly combine different primitives into meaningful structures (including fully articulated animated shapes) and supports various levitation display approaches (e.g., particles moving at high speed). In this paper, we describe our pipeline and demonstrate it with heterogeneous combinations of levitation primitives.
4
“Put it on the Top, I’ll Read it Later”: Investigating Users’ Desired Display Order for Smartphone Notifications
Tzu-Chieh Lin (National Chiao Tung University, Hsinchu, Taiwan)Yu-Shao Su (National Chiao Tung University, Hsinchu, Taiwan)Emily Helen Yang (National Chiao Tung University, Hsinchu, Taiwan)Yun Han Chen (National Chiao Tung University, Hsinchu, Taiwan)Hao-Ping Lee (National Chiao Tung University, Hsinchu, Taiwan)Yung-Ju Chang (National Chiao Tung University, Hsinchu, Taiwan)
Smartphone users do not deal with notifications strictly in the order they are displayed, but sometimes read them from the middle, suggesting a mismatch between current systems’ display order and users’ needs. We therefore used mixed methods to investigate 34 smartphone users’ desired notification display order and related it with users’ self-reported order of attendance. Classifying using these two orders as dimensions, we obtained seven types of notifications, which helped us not only highlight the distinct attributes but also understand the implied roles of these seven types of notifications, as well as the implied meaning of display orders. This is especially manifested in our identification of three main mismatches between the two orders. Qualitative findings reveal several meanings that participants attached to particular positions when arranging notifications. We offer design implications for notification systems, including a call for a two-dimensional notification layout to support the multi-purpose roles of smartphone notifications we identified.
4
Sticky Goals: Understanding Goal Commitments for Behavioral Changes in the Wild
Hyunsoo Lee (KAIST, Daejeon, Korea, Republic of)Auk Kim (Kangwon National University, Chuncheon, Korea, Republic of)Hwajung Hong (Seoul National University, Seoul, Korea, Republic of)Uichin Lee (KAIST, Daejeon, Korea, Republic of)
A commitment device, an attempt to bind oneself for a successful goal achievement, has been used as an effective strategy to promote behavior change. However, little is known about how commitment devices are used in the wild, and what aspects of commitment devices are related to goal achievements. In this paper, we explore a large-scale dataset from stickK, an online behavior change support system that provides both financial and social commitments. We characterize the patterns of behavior change goals (e.g., topics and commitment setting) and then perform a series of multilevel regression analyses on goal achievements. Our results reveal that successful goal achievements are largely dependent on the configuration of financial and social commitment devices, and a mixed commitment setting is considered beneficial. We discuss how our findings could inform the design of effective commitment devices, and how large-scale data can be leveraged to support data-driven goal elicitation and customization.
4
On Designing Programming Error Messages for Novices: Readability and its Constituent Factors
Paul Denny (The University of Auckland, Auckland, New Zealand)James Prather (Abilene Christian University, Abilene, Texas, United States)Brett A. Becker (University College Dublin, Dublin, Ireland)Catherine Mooney (University College Dublin, Dublin, Ireland)John Homer (Abilene Christian University, Abilene, Texas, United States)Zachary C. Albrecht (Abilene Christian University, Abilene, Texas, United States)Garrett B. Powell (Abilene Christian University, Abilene, Texas, United States)
Programming error messages play an important role in learning to program. The cycle of program input and error message response completes a loop between the programmer and the compiler/interpreter and is a fundamental interaction between human and computer. However, error messages are notoriously problematic, especially for novices. Despite numerous guidelines citing the importance of message readability, there is little empirical research dedicated to understanding and assessing it. We report three related experiments investigating factors that influence programming error message readability. In the first two experiments we identify possible factors, and in the third we ask novice programmers to rate messages using scales derived from these factors. We find evidence that several key factors significantly affect message readability: message length, jargon use, sentence structure, and vocabulary. This provides novel empirical support for previously untested long-standing guidelines on message design, and informs future efforts to create readability metrics for programming error messages.
4
Drone in Love: Emotional Perception of Facial Expressions on Flying Robots
Viviane Herdel (Ben Gurion University of the Negev, Be’er Sheva, Israel)Anastasia Kuzminykh (University of Toronto, Toronto, Ontario, Canada)Andrea Hildebrandt (Carl von Ossietzky University Oldenburg, Oldenburg, Germany)Jessica R. Cauchard (Ben Gurion University of the Negev, Be'er Sheva, Israel)
Drones are rapidly populating human spaces, yet little is known about how these flying robots are perceived and understood by humans. Recent work suggests that their acceptance is predicated upon their sociability. This paper explores the use of facial expressions to represent emotions on social drones. We leveraged design practices from ground robotics and created a set of rendered robotic faces that convey basic emotions. We evaluated individuals' responses to these emotional facial expressions on drones in two empirical studies (N = 98, N = 98). Our results demonstrate that individuals accurately recognize five drone emotional expressions, as well as make sense of intensities within emotion categories. We describe how participants were emotionally affected by the drone, showed empathy towards it, and created narratives to interpret its emotions. As a consequence, we formulate design recommendations for social drones and discuss methodological insights on the use of static versus dynamic stimuli in affective robotics studies.
4
SoniBand: Understanding the Effects of Metaphorical Movement Sonifications on Body Perception and Physical Activity
Judith Ley-Flores (Universidad Carlos III de Madrid, Leganes, Madrid, Spain)Laia Turmo Vidal (Uppsala University, Uppsala, Sweden)Nadia Berthouze (University College London, London, United Kingdom)Aneesha Singh (University College London, London, United Kingdom)Frederic Bevilacqua (STMS IRCAM-CNRS-Sorbonne Université, Paris, France)Ana Tajadura-Jiménez (Universidad Carlos III de Madrid / University College London, Madrid / London, Spain)
Negative body perceptions are a major predictor of physical inactivity, a serious health concern. Sensory feedback can be used to alter such body perception; movement sonification, in particular, has been suggested to affect body perception and levels of physical activity (PA) in inactive people. We investigated how metaphorical sounds impact body perception and PA. We report two qualitative studies centered on performing different strengthening/flexibility exercises using SoniBand, a wearable that augments movement through different sounds. The first study involved physically active participants and served to obtain a nuanced understanding of the sonifications’ impact. The second, in the home of physically inactive participants, served to identify which effects could support PA adherence. Our findings show that movement sonification based on metaphors led to changes in body perception (e.g., feeling strong) and PA (e.g., repetitions) in both populations, but effects could differ according to the existing PA-level. We discuss principles for metaphor-based sonification design to foster PA.
4
Radi-Eye: Hands-free Radial Interfaces for 3D Interaction using Gaze-activated Head-crossing
Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom)Dominic Potts (Lancaster University, Lancaster, Lancashire, United Kingdom)Bill Bapisch (Ludwig-Maximilians-Universität, Munich, Germany)Hans Gellersen (Aarhus University, Aarhus, Denmark)
Eye gaze and head movement are attractive for hands-free 3D interaction in head-mounted displays, but existing interfaces afford only limited control. Radi-Eye is a novel pop-up radial interface designed to maximise expressiveness with input from only the eyes and head. Radi-Eye provides widgets for discrete and continuous input and scales to support larger feature sets. Widgets can be selected with Look & Cross, using gaze for pre-selection followed by head-crossing as trigger and for manipulation. The technique leverages natural eye-head coordination where eye and head move at an offset unless explicitly brought into alignment, enabling interaction without risk of unintended input. We explore Radi-Eye in three augmented and virtual reality applications, and evaluate the effect of radial interface scale and orientation on performance with Look & Cross. The results show that Radi-Eye provides users with fast and accurate input while opening up a new design space for hands-free fluid interaction.
4
Remote Learners, Home Makers: How Digital Fabrication Was Taught Online During a Pandemic
Gabrielle Benabdallah (University of Washington, Seattle, Washington, United States)Sam Bourgault (University of California, Santa Barbara, Santa Barbara, California, United States)Nadya Peek (University of Washington, Seattle, Washington, United States)Jennifer Jacobs (University of California Santa Barbara, Santa Barbara, California, United States)
Digital fabrication courses that relied on physical makerspaces were severely disrupted by COVID-19. As universities shut down in Spring 2020, instructors developed new models for digital fabrication at a distance. Through interviews with faculty and students and examination of course materials, we recount the experiences of eight remote digital fabrication courses. We found that learning with hobbyist equipment and online social networks could emulate the experience of using industrial equipment in shared workshops. Furthermore, at-home digital fabrication offered unique learning opportunities including more iteration, machine tuning, and maintenance. These opportunities depended on new forms of labor and varied based on student living situations. Our findings have implications for remote and in-person digital fabrication instruction. They indicate how access to tools was important, but not as critical as providing opportunities for iteration; they show how remote fabrication exacerbated student inequities; and they suggest strategies for evaluating trade-offs in remote fabrication models with respect to learning objectives.
4
The Ethics of Multiplayer Game Design and Community Management: Industry Perspectives and Challenges
Lucy A. Sparrow (The University of Melbourne, Melbourne, VIC, Australia)Martin Gibbs (The University of Melbourne, Melbourne, Victoria, Australia)Michael Arnold (The University of Melbourne, Melbourne, VIC, Australia)
Game industry professionals are frequently implementing new methods of addressing ethical issues related to in-game toxicity and disruptive player behaviours associated with online multiplayer games. However, academic work on these behaviours tends to focus on the perspectives of players rather than the industry. To fully understand the ethics of multiplayer games and promote ethical design, we must examine the challenges facing those designing multiplayer games through an ethical lens. To this end, this paper presents a reflexive thematic analysis of 21 in-depth interviews with games industry professionals on their ethical views and experiences in game design and community management. We identify a number of tensions involved in making ethics-related design decisions for divided player communities alongside current game design practices that are concerned with functionality, revenue and entertainment. We then put forward a set of design considerations for integrating ethics into multiplayer game design.
4
Increasing Electrical Muscle Stimulation’s Dexterity by means of Back of the Hand Actuation
Akifumi Takahashi (University of Chicago, Chicago, Illinois, United States)Jas Brooks (University of Chicago, Chicago, Illinois, United States)Hiroyuki Kajimoto (The University of Electro-Communications, Chofu, Tokyo, Japan)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a technique that allows an unprecedented level of dexterity in electrical muscle stimulation (EMS), i.e., it allows interactive EMS-based devices to flex the user’s fingers independently of each other. EMS is a promising technique for force feedback because of its small form factor when compared to mechanical actuators. However, the current EMS approach to flexing the user’s fingers (i.e., attaching electrodes to the base of the forearm, where finger muscles anchor) is limited by its inability to flex a target finger’s metacarpophalangeal (MCP) joint independently of the other fingers. In other words, current EMS devices cannot flex one finger alone; they always induce unwanted actuation of adjacent fingers. To tackle this lack of dexterity, we propose and validate a new electrode layout that places the electrodes on the back of the hand, where they stimulate the interossei/lumbrical muscles in the palm, which have never received attention with regards to EMS. In our user study, we found that our technique offers four key benefits when compared to existing EMS electrode layouts: our technique (1) flexes all four fingers around the MCP joint more independently; (2) has less unwanted flexion of other joints (such as the proximal interphalangeal joint); (3) is more robust to wrist rotations; and (4) reduces calibration time. Therefore, our EMS technique enables applications for interactive EMS systems that require a level of flexion dexterity not available until now. We demonstrate the improved dexterity with four example applications: three musical instrument tutorials (piano, drum, and guitar) and a VR application that renders force feedback in individual fingers while manipulating a yo-yo.