注目の論文一覧

各カテゴリ上位30論文までを表示しています

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

3
Classification of Functional Attention in Video Meetings
Anastasia Kuzminykh (University of Waterloo, Waterloo, ON, Canada)Sean Rintel (Microsoft Research, Cambridge, United Kingdom)
Participants in video meetings have long struggled with asymmetrical attention levels, especially when participants are distributed unevenly. While technological advances offer exciting opportunities to augment remote users' attention, the phenomenological complexity of attention means that to design attention-fostering features we must first understand what aspects of it are functionally meaningful to support. In this paper, we present a functional classification of observable attention for video meetings. The classification was informed by two studies on sense-making and selectiveness of attention in work meetings. It includes categories of attention accessible for technological support, their functions in a meeting process, and meeting-related activities that correspond to these functions. This classification serves as a multi-level representation of attention and informs the design of features aiming to support remote participants' attention in video meetings.
3
When Design Novices and LEGO^(®) Meet: Stimulating Creative Thinking for Interface Design
Simon Bourdeau (ESG-UQAM, Montréal, PQ, Canada)Annemarie Lesage (HEC Montréal, Montréal, PQ, Canada)Béatrice Caron (HEC Montréal, Montréal, PQ, Canada)Pierre-Majorique Léger (HEC Montréal, Montréal, PQ, Canada)
Design thinking is an iterative, human-centered approach to innovation. Its success rests on collaboration within a multidisciplinary project team going through cycles of divergent and convergent ideations. In these teams, nondesigners risk diminishing the divergent reach because they are generally reluctant to sketch, thus missing out on theambiguous, imprecise early conceptual divergent phases. We hypothesized that LEGO^(®) could advantageously be a substitute to sketching. In this comparative study, 44 nondesigners randomly paired in 22 dyads did two conceptual ideations of healthcare landing pages, one using pen/paper (spontaneously writing words on sticky notes) and the other using LEGO, assessed through Torrance and Guilford frameworks for divergent thinking. Results show that LEGO interfaces gathered significantly higher divergent thinking scores because their concepts were significantly more elaborated. Furthermore, when using LEGO, teams who generated more elements were likely to also generate more ideas, more categories of ideas and more original ideas.
3
Live Sketchnoting Across Platforms: Exploring the Potential and Limitations of Analogue and Digital Tools
Marina Fernández Camporro (University College London, London, United Kingdom)Nicolai Marquardt (University College London, London, United Kingdom)
Sketchnoting is the process of creating a visual record with combined text and imagery of an event or presentation. Although analogue tools are still the most common method for sketchnoting, the use of digital tools is increasing. We conducted a study to better understand the current practices, techniques, compromises and opportunities of creating both pen&paper and digital sketchnotes. Our research combines insights from semi-structured interviews with the findings from a within-subjects observational study where ten participants created real time sketchnotes of two video presentations on both paper and digital tablet. We report our key findings, categorised into six themes: insights into sense of space; trade-offs with flexibility; choice paradox and cognitive load; matters of perception, accuracy and texture; issues around confidence; and practicalities. We discuss those findings, the potential and limitations of different methods, and implications for the design of future digital sketchnoting tools.
3
AirTouch: 3D-printed Touch-Sensitive Objects Using Pneumatic Sensing
Carlos E. Tejada (University of Copenhagen, Copenhagen, Denmark)Raf Ramakers (Hasselt University, Hasselt, Belgium)Sebastian Boring (Aalborg University, Copenhagen, Denmark)Daniel Ashbrook (University of Copenhagen, Copenhagen, Denmark)
3D printing technology can be used to rapidly prototype the look and feel of 3D objects. However, the objects produced are passive. There has been increasing interest in making these objects interactive, yet they often require assembling components or complex calibration. In this paper, we contribute AirTouch, a technique that enables designers to fabricate touch-sensitive objects with minimal assembly and calibration using pneumatic sensing. AirTouch-enabled objects are 3D printed as a single structure using a consumer-level 3D printer. AirTouch uses pre-trained machine learning models to identify interactions with fabricated objects, meaning that there is no calibration required once the object has completed printing. We evaluate our technique using fabricated objects with various geometries and touch sensitive locations, obtaining accuracies of at least 90% with 12 interactive locations.
3
Defining Haptic Experience: Foundations for Understanding, Communicating, and Evaluating HX
Erin Kim (University of Waterloo, Waterloo, ON, Canada)Oliver Schneider (University of Waterloo, Waterloo, ON, Canada)
Haptic technology is maturing, with expectations and evidence that it will contribute to user experience (UX). However, we have very little understanding about how haptic technology can influence people's experience. Researchers and designers need a way to understand, communicate, and evaluate haptic technology's effect on UX. From a literature review and two studies – one with haptics novices, the other with expert hapticians – we developed a theoretical model of the factors that constitute a good haptic experience (HX). We define HX and propose its constituent factors: design parameters of Timeliness, Density, Intensity, and Timbre; the cross-cutting concern of Personalization; usability requirements of Utility, Causality, Consistency, and Saliency; and experiential factors of Harmony, Expressivity, Autotelics, Immersion, and Realism as guiding constructs important for haptic experience. This model will help guide design and research of haptic systems, inform language around haptics, and provide the basis for evaluative instruments, such as checklists, heuristics, or questionnaires.
3
Evaluating a Personalizable, Inconspicuous Vibrotactile(PIV) Breathing Pacer for In-the-Moment Affect Regulation
Pardis Miri (Stanford University, Palo Alto, CA, USA)Emily Jusuf (Stanford University, Palo Alto, CA, USA)Andero Uusberg (Stanford University, Stanford, CA, USA)Horia Margarit (Stanford University, Palo Alto, CA, USA)Robert Flory (Intel, Hillsboro, OR, USA)Katherine Isbister (University of California, Santa Cruz, Santa Cruz, CA, USA)Keith Marzullo (University of Maryland, College Park, MD, USA)James J. Gross (Stanford University, Stanford, CA, USA)
Given the prevalence and adverse impact of anxiety, there is considerable interest in using technology to regulate anxiety. Evaluating the efficacy of such technology in terms of both the average effect (the intervention efficacy) and the heterogeneous effect (for whom and in what context the intervention was effective) is of paramount importance. In this paper, we demonstrate the efficacy of PIV, a personalized breathing pacer, in reducing anxiety in the presence of a cognitive stressor. We also quantify the relation between our specific stressor and PIV-user engagement. To our knowledge, this is the first mixed-design study of a vibrotactile affect regulation technology which accounts for a specific stressor and for individual differences in relation to the technology's efficacy. Guidelines in this paper can be applied for designing and evaluating other affect regulation technologies.
3
Trigeminal-based Temperature Illusions
Jas Brooks (University of Chicago, Chicago, IL, USA)Steven Nagels (University of Chicago, Chicago, IL, USA)Pedro Lopes (University of Chicago, Chicago, IL, USA)
We explore a temperature illusion that uses low-powered electronics and enables the miniaturization of simple warm and cool sensations. Our illusion relies on the properties of certain scents, such as the coolness of mint or hotness of peppers. These odors trigger not only the olfactory bulb, but also the nose's trigeminal nerve, which has receptors that respond to both temperature and chemicals. To exploit this, we engineered a wearable device based on micropumps and an atomizer that emits up to three custom-made "thermal" scents directly to the user's nose. Breathing in these scents causes the user to feel warmer or cooler. We demonstrate how our device renders warmth and cooling sensations in virtual experiences. In our first study, we evaluated six candidate "thermal" scents. We found two hot-cold pairs, with one pair being less identifiable by odor. In our second study, pParticipants rated VR experiences with our device trigeminal stimulants as significantly warmer or cooler than the baseline conditions. Lastly, we believe this offers an alternative to existing thermal feedback devices, which unfortunately rely on power-hungry heat-lamps or Peltier-elements.
3
UI Dark Patterns and Where to Find Them: A Study on Mobile Applications and User Perception
Linda Di Geronimo (University of Zürich, Zürich, Switzerland)Larissa Braz (University of Zürich, Zürich, Switzerland)Enrico Fregnan (University of Zürich, Zürich, Switzerland)Fabio Palomba (University of Zürich, Zürich, Switzerland)Alberto Bacchelli (University of Zürich, Zürich, Switzerland)
A Dark Pattern (DP) is an interface maliciously crafted to deceive users into performing actions they did not mean to do. In this work, we analyze Dark Patterns in 240 popular mobile apps and conduct an online experiment with 589 users on how they perceive Dark Patterns in such apps. The results of the analysis show that 95% of the analyzed apps contain one or more forms of Dark Patterns and, on average, popular applications include at least seven different types of deceiving interfaces. The online experiment shows that most users do not recognize Dark Patterns, but can perform better in recognizing malicious designs if informed on the issue. We discuss the impact of our work and what measures could be applied to alleviate the issue.
3
Exploring the Potential of an Intelligent Tutoring System for Sketching Fundamentals
Blake Williford (Texas A&M University, College Station, TX, USA)Matthew Runyon (Texas A&M University, College Station, TX, USA)Wayne Li (Georgia Institute of Technology, Atlanta, GA, USA)Julie Linsey (Georgia Institute of Technology, Atlanta, GA, USA)Tracy Hammond (Texas A&M University, College Station, TX, USA)
Sketching is a practical and useful skill that can benefit communication and problem solving. However, it remains a difficult skill to learn because of low confidence and motivation among students and limited availability for instruction and personalized feedback among teachers. There is an need to improve the educational experience for both groups, and we hypothesized that integrating technology could provide a variety of benefits. We designed and developed an intelligent tutoring system for sketching fundamentals called Sketchtivity, and deployed it in to six existing courses at the high school and university level during the 2017-2018 school year. 268 students used the tool and produced more than 116,000 sketches of basic primitives. We conducted semi-structured interviews with the six teachers who implemented the software, as well as nine students from a course where the tool was used extensively. Using grounded theory, we found ten categories which unveiled the benefits and limitations of integrating an intelligent tutoring system for sketching fundamentals in to existing pedagogy.
2
Music Creation by Example
Emma Frid (KTH Royal Institute of Technology, Stockholm, Sweden)Celso Gomes (Adobe Research, Seattle, WA, USA)Zeyu Jin (Adobe Research, San Francisco, CA, USA)
Short online videos have become the dominating media on social platforms. However, finding suitable music to accompany videos can be a challenging task to some video creators, due to copyright constraints, limitations in search engines, and required audio-editing expertise. One possible solution to these problems is to use AI music generation. In this paper we present a user interface (UI) paradigm that allows users to input a song to an AI engine and then interactively regenerate and mix AI-generated music. To arrive at this design, we conducted user studies with a total of 104 video creators at several stages of our design and development process. User studies supported the effectiveness of our approach and provided valuable insights about human-AI interaction as well as the design and evaluation of mixed-initiative interfaces in creative practice.
2
Household Surface Interactions: Understanding User Input Preferences and Perceived Home Experiences
Garreth W. Tigwell (Rochester Institute of Technology, Rochester, NY, USA)Michael Crabb (University of Dundee, Dundee, United Kingdom)
Households contain a variety of surfaces that are used in a number of activity contexts. As ambient technology becomes commonplace in our homes, it is only a matter of time before these surfaces become linked to computer systems for Household Surface Interaction (HSI). However, little is known about the user experience attached to HSI, and the potential acceptance of HSI within modern homes. To address this problem, we ran a mixed methods user study with 39 participants to examine HSI using nine household surfaces and five common gestures (tap, press, swipe, drag, and pinch). We found that under the right conditions, surfaces with some amount of texture can enhance HSI. Furthermore, perceived good and poor user experience varied among participants for surface type indicating individual preferences. We present findings and design considerations based on surface characteristics and the challenges that users perceive they may have with HSI within their homes.
2
The Curious Case of the Transdiegetic Cow, or a Mission to Foster Other-Oriented Empathy Through Virtual Reality
Martijn J.L. Kors (Eindhoven University of Technology & Amsterdam University of Applied Science, Eindhoven, Noord Brabant, Netherlands)Erik D. van der Spek (Eindhoven University of Technology, Eindhoven, Noord Brabant, Netherlands)Julia A. Bopp (University of Basel, Basel, Switzerland)Karel Millenaar (Amsterdam University of Applied Science, Amsterdam, Netherlands)Rutger L. van Teutem (Amsterdam University of Applied Sciences, Amsterdam, Netherlands)Gabriele Ferri (Amsterdam University of Applied Sciences, Amsterdam, Netherlands)Ben A.M. Schouten (Eindhoven University of Technology & Amsterdam University of Applied Sciences, Eindhoven & Amsterdam, Netherlands)
Socially aware persuasive games that use immersive technologies often appeal to empathy, prompting users to feel and understand the struggles of another. However, the often sought-after 'standing in another's shoes' experience, in which users virtually inhabit another in distress, may complicate other-oriented empathy. Following a Research through Design approach, we designed for other-oriented empathy – focusing on a partaker-perspective and diegetic reflection – which resulted in Permanent; a virtual reality game designed to foster empathy towards evacuees from the 2011 Fukushima Daiichi nuclear disaster. We deployed Permanent 'in the wild' and carried out a qualitative study with 78 participants in the Netherlands and Japan to capture user experiences. Content Analysis of the data showed a predominance of other-oriented empathy across countries, and in our Thematic Analysis, we identified the themes of 'Spatial, Other, and Self -Awareness', 'Personal Accounts', 'Ambivalence', and 'Transdiegetic Items', resulting in design insights for fostering other-oriented empathy through virtual reality.
2
Kirigami Haptic Swatches: Design Methods for Cut-and-Fold Haptic Feedback Mechanisms
Zekun Chang (University of Tokyo, Tokyo, Japan)Tung D. Ta (University of Tokyo, Tokyo, Japan)Koya Narumi (University of Tokyo, Tokyo, Japan)Heeju Kim (University of Tokyo, Tokyo, Japan)Fuminori Okuya (University of Tokyo, Hongo, Japan)Dongchi Li (University of Tokyo, Tokyo, Japan)Kunihiro Kato (University of Tokyo, Tokyo, Japan)Jie Qi (University of Tokyo, Tokyo, Japan)Yoshinobu Miyamoto (Aichi Institute of Technology, Toyota, Japan)Kazuya Saito (Kyushu University, Minami-Ku, Japan)Yoshihiro Kawahara (University of Tokyo, Tokyo, Japan)
Kirigami Haptic Swatches demonstrate how kirigami and origami based structures enable sophisticated haptic feedback through simple cut-and-fold fabrication techniques. We leverage four types of geometric patterns: rotational erection system (RES), split-fold waterbomb (SFWB), the overlaid structure of SFWB and RES (SFWB+RES), and cylindrical origami, to render different sets of haptic feedback (i.e. linear, bistable, bouncing snap-through, and rotational force behaviors, respectively). In each structure, not only the form factor but also the force feedback properties can be tuned through geometric parameters. We experimentally analyzed and modeled the structures, and implemented software to automatically generate 2D patterns for desired haptic properties. We also demonstrate five example applications including an assistive custom keyboard, rotational switch, multi-sensory toy, task checklist, and phone accessories. We believe the Kirigami Haptic Swatches helps tinkerers, designers, and even researchers to create interactions that enrich our haptic experience.
2
Embedding a VR Game Studio in a Sedentary Workplace: Use, Experience and Exercise Benefits
Soojeong Yoo (University of Sydney, Sydney, NSW, Australia)Phillip Gough (University of Sydney, Sydney, NSW, Australia)Judy Kay (University of Sydney, Sydney, NSW, Australia)
Many people, especially those in sedentary occupations, fail to achieve the recommended levels of physical activity. Virtual reality (VR) games have the potential to overcome this because they are fun and also can be physically demanding. This paper explores whether a VR game studio can help workers in sedentary jobs to get valuable levels of exercise. We studied how 11 participants used our VR game studio in a sedentary workplace over 8-weeks and their perceptions of the experience. We analysed the physical exertion in the VR game studio, comparing this to their step counts from a smartwatch. All participants achieved valuable levels of physical activity and mood benefits. Importantly, for 6 participants, only with the VR game studio did they meet recommended activity levels. Our key contributions are insights about the use of a workplace VR game studio and its health benefits.
2
Body-Penetrating Tactile Phantom Sensations
Jinsoo Kim (Pohang University of Science and Technology, Pohang, Republic of Korea)Seungjae Oh (Pohang University of Science and Technology, Pohang, Republic of Korea)Chaeyong Park (Pohang University of Science and Technology, Pohang, Republic of Korea)Seungmoon Choi (Pohang University of Science and Technology, Pohang, Republic of Korea)
In tactile interaction, a phantom sensation refers to an illusion felt on the skin between two distant points at which vibrations are applied. It can improve the perceptual spatial resolution of tactile stimulation with a few tactors. All phantom sensations reported in the literature act on the skin or out of the body, but no such reports exist for those eliciting sensations penetrating the body. This paper addresses tactile phantom sensations in which two vibration actuators on the dorsal and palmar sides of the hand present an illusion of vibration passing through the hand. We also demonstrate similar tactile illusions for the torso. For optimal design, we conducted user studies while varying vibration frequency, envelope function, stimulus duration, and penetrating direction. Based on the results, we present design guidelines on penetrating phantom sensations for its use in immersive virtual reality applications.
2
E-Textile Microinteractions: Augmenting Twist with Flick, Slide and Grasp Gestures for Soft Electronics
Alex Olwal (Google Research, Mountain View, CA, USA)Thad Starner (Google Research, Mountain View, CA, USA)Gowa Mainini (Google Research, Mountain View, CA, USA)
E-textile microinteractions advance cord-based interfaces by enabling the simultaneous use of precise continuous control and casual discrete gestures. We leverage the recently introduced I/O Braid sensing architecture to enable a series of user studies and experiments which help design suitable interactions and a real-time gesture recognition pipeline. Informed by a gesture elicitation study with 36 participants, we developed a user-dependent classifier for eight discrete gestures with 94% accuracy for 12 participants. In a formal evaluation we show that we can enable precise manipulation with the same architecture. Our quantitative targeting experiment suggests that twisting is faster than existing headphone button controls and is comparable in speed to a capacitive touch surface. Qualitative interview feedback indicates a preference for I/O Braid's interaction over that of in-line headphone controls. Our applications demonstrate how continuous and discrete gestures can be combined to form new, integrated e-textile microinteraction techniques for real-time continuous control, discrete actions and mode switching.
2
Enabling Data-Driven API Design with Community Usage Data: A Need-Finding Study
Tianyi Zhang (Harvard University, Cambridge, MA, USA)Björn Hartmann (University of California, Berkeley, Berkeley, CA, USA)Miryung Kim (University of California, Los Angeles, Los Angeles, CA, USA)Elena L. Glassman (Harvard University, Cambridge, MA, USA)
APIs are becoming the fundamental building block of modern software and their usability is crucial to programming efficiency and software quality. Yet API designers find it hard to gather and interpret user feedback on their APIs. To close the gap, we interviewed 23 API designers from 6 companies and 11 open-source projects to understand their practices and needs. The primary way of gathering user feedback is through bug reports and peer reviews, as formal usability testing is prohibitively expensive to conduct in practice. Participants expressed a strong desire to gather real-world use cases and understand users' mental models, but there was a lack of tool support for such needs. In particular, participants were curious about where users got stuck, their workarounds, common mistakes, and unanticipated corner cases. We highlight several opportunities to address those unmet needs, including developing new mechanisms that systematically elicit users' mental models, building mining frameworks that identify recurring patterns beyond shallow statistics about API usage, and exploring alternative design choices made in similar libraries.
2
A Skin-Stroke Display on the Eye-Ring Through Head-Mounted Displays
Wen-Jie Tseng (National Chiao Tung University & Institut Polytechnique de Paris, Hsinchu, Taiwan Roc)Yi-Chen Lee (Institute of Multimedia Engineering, Hsinchu, Taiwan Roc)Roshan L. Peiris (Rochester Institute of Technology, Rochester, NY, USA)Liwei Chan (Computer Science, Hsinchu, Taiwan Roc)
We present the Skin-Stroke Display, a system mounted on the lens inside the head-mounted display, which exerts subtle yet recognizable tactile feedback on the eye-ring using a motorized air jet. To inform our design of noticeable air-jet haptic feedback, we conducted a user study to identify absolute detection thresholds. Our results show that the tactile sensation had different sensitivity around the eyes, and we determined a standard intensity (8 mbar) to prevent turbulent airflow blowing into the eyes. In the second study, we asked participants to adjust the intensity around the eye for equal sensation based on standard intensity. Next, we investigated the recognition of point and stroke stimuli with or without inducing cognitive load on eight directions on the eye-ring. Our longStroke stimulus can achieve an accuracy of 82.6% without cognitive load and 80.6% with cognitive load simulated by the Stroop test. Finally, we demonstrate example applications using the skin-stroke display as the off-screen indicator, tactile I/O progress display, and tactile display.
2
Evaluation of Machine Learning Techniques for Hand Pose Estimation on Handheld Device with Proximity Sensor
Kazuyuki Arimatsu (Sony Interactive Entertainment Inc., Tokyo, Japan)Hideki Mori (Sony Interactive Entertainment Inc., Minato-ku, Japan)
Tracking finger movement for natural interaction using hand is commonly studied. For vision-based implementations of finger tracking in virtual reality (VR) application, finger movement is occluded by a handheld device which is necessary for auxiliary input, thus tracking finger movement using cameras is still challenging. Finger tracking controllers using capacitive proximity sensors on the surface are starting to appear. However, research on estimating articulated hand pose from curved capacitance sensing electrodes is still immature. Therefore, we built a prototype with 62 electrodes and recorded training datasets using an optical tracking system. We have introduced 2.5D representation to apply convolutional neural network methods on a capacitive image of the curved surface, and two types of network architectures based on recent achievements in the computer vision field were evaluated with our dataset. We also implemented real-time interactive applications using the prototype and demonstrated the possibility of intuitive interaction using fingers in VR applications.
2
Gaiters: Exploring Skin Stretch Feedback on Legs for Enhancing Virtual Reality Experiences
Chi Wang (National Taiwan University & National Chiao Tung University, Taipei & Hsinchu, Taiwan Roc)Da-Yuan Huang (National Chiao Tung University, Hsinchu, Taiwan Roc)Shuo-Wen Hsu (National Chiao Tung University, Hsinchu, Taiwan Roc)Cheng-Lung Lin (National Chiao Tung University, Hsinchu, Taiwan Roc)Yeu-Luen Chiu (National Chiao Tung University, Hsinchu, Taiwan Roc)Chu-En Hou (National Chiao Tung University, Hsinchu, Taiwan Roc)Bing-Yu Chen (National Taiwan University, Taipei, Taiwan Roc)
We propose generating two-dimensional skin stretch feedback on the user's legs. Skin stretch is useful cutaneous feedback to induce the perception of virtual textures and illusory forces and to deliver directional cues. This feedback has been applied to the head, body, and upper limbs to simulate rich physical properties in virtual reality (VR). However, how to expand the benefit of skin stretch feedback and apply it to the lower limbs, remains to be explored. Our first two psychophysical studies examined the minimum changes in skin stretch distance and stretch angle that are perceivable by participants. We then designed and implemented Gaiters, a pair of ungrounded, leg-worn devices, each of which is able to generate multiple two-dimensional skin stretches on the skin of the user's leg. With Gaiters, we conducted an exploratory study to understand participants' experiences when coupling skin stretch patterns with various lower limb actions. The results indicate that rich haptic experiences can be created by our prototype. Finally, a user evaluation indicates that participants enjoyed the experiences when using Gaiters and considered skin stretch as compelling haptic feedback on the legs.
2
Designing IoT Resources to Support Outdoor Play for Children
Thomas Dylan (Northumbria University, Newcastle upon Tyne, United Kingdom)Gavin Wood (Northumbria University, Newcastle upon Tyne, United Kingdom)Abigail C. Durrant (Northumbria University, Newcastle upon Tyne, United Kingdom)John Vines (Northumbria University, Newcastle upon Tyne, United Kingdom)Pablo E. Torres (University College London, London, United Kingdom)Philip I. N. Ulrich (Canterbury Christ Church University, Canterbury, United Kingdom)Mutlu Cukurova (University College London, London, United Kingdom)Amanda Carr (Canterbury Christ Church University, Canterbury, United Kingdom)Sena Çerçi (Northumbria University, Newcastle Upon Tyne, United Kingdom)Shaun Lawson (Northumbria University, Newcastle upon Tyne, United Kingdom)
We describe a Research-through-Design (RtD) project that explores the Internet of Things (IoT) as a resource for children's free play outdoors. Based on initial insights from a design ethnography, we developed four RtD prototypes for social play in different scenarios of use outdoors, including congregating on a street or in a park to play physical games with IoT. We observed these prototypes in use by children in their free play in two community settings, and report on the qualitative analysis of our fieldwork. Our findings highlight the designs' material qualities that encouraged social and physical play under certain conditions, suggesting social affordances that are central to the success of IoT designs for free play outdoors. We provide directions for future research that addresses the challenges faced when deploying IoT with children, contributing new considerations for interaction design with children in outdoor settings and free play contexts.
2
Debugging Database Queries: A Survey of Tools, Techniques, and Users
Sneha Gathani (University of Maryland, College Park, College Park, MD, USA)Peter Lim (University of Maryland, College Park, College Park, MD, USA)Leilani Battle (University of Maryland, College Park, College Park, MD, USA)
Database management systems (or DBMSs) have been around for decades, and yet are still difficult to use, particularly when trying to identify and fix errors in user programs (or queries). We seek to understand what methods have been proposed to help people debug database queries, and whether these techniques have ultimately been adopted by DBMSs (and users). We conducted an interdisciplinary review of 112 papers and tools from the database, visualisation and HCI communities. To better understand whether academic and industry approaches are meeting the needs of users, we interviewed 20 database users (and some designers), and found surprising results. In particular, there seems to be a wide gulf between users' debugging strategies and the functionality implemented in existing DBMSs, as well as proposed in the literature. In response, we propose new design guidelines to help system designers to build features that more closely match users debugging strategies.
2
Race Yourselves: A Longitudinal Exploration of Self-Competition Between Past, Present, and Future Performances in a VR Exergame
Alexander Michael (University of Bath, Bath, United Kingdom)Christof Lutteroth (University of Bath, Bath, United Kingdom)
Participating in competitive races can be a thrilling experience for athletes, involving a rush of excitement and sensations of flow, achievement, and self-fulfilment. However, for non-athletes, the prospect of competition is often a scary one which affects intrinsic motivation negatively, especially for less fit, less competitive individuals. We propose a novel method making the positive racing experience accessible to non-athletes using a high-intensity cycling VR exergame: by recording and replaying all their previous gameplay sessions simultaneously, including a projected future performance, players can race against a crowd of "ghost" avatars representing their individual fitness journey. The experience stays relevant and exciting as every race adds a new competitor. A longitudinal study over four weeks and a cross-sectional study found that the new method improves physical performance, intrinsic motivation, and flow compared to a non-competitive exergame. Additionally, the longitudinal study provides insights into the longer-term effects of VR exergames.
2
FoodFab: Creating Food Perception Illusions using Food 3D Printing
Ying-Ju Lin (Osaka University, Toyonaka, Osaka, Japan)Parinya Punpongsanon (Osaka University & Massachusetts Institute of Technology, Toyonaka, Osaka, Japan)Xin Wen (Massachusetts Institute of Technology, Cambridge, MA, USA)Daisuke Iwai (Osaka University, Toyonaka, Osaka, Japan)Kosuke Sato (Osaka University, Toyonaka, Osaka, Japan)Marianna Obrist (Massachusetts Institute of Technology & University of Sussex, Brighton, United Kingdom)Stefanie Mueller (Massachusetts Institute of Technology, Cambridge, MA, USA)
Personalization of eating such that everyone consumes only what they need allows improving our management of food waste. In this paper, we explore the use of food 3D printing to create perceptual illusions for controlling the level of perceived satiety given a defined amount of calories. We present FoodFab, a system that allows users to control their food intake through modifying a food's internal structure via two 3D printing parameters: infill pattern and infill density. In two experiments with a total of 30 participants, we studied the effect of these parameters on users' chewing time that is known to affect people's feeling of satiety. Our results show that we can indeed modify the chewing time by varying infill pattern and density, and thus control perceived satiety. Based on the results, we propose two computational models and integrate them into a user interface that simplifies the creation of personalized food structures.
2
Investigating Collaborative Exploration of Design Alternatives on a Wall-Sized Display
Yujiro Okuya (Université Paris-Saclay, CNRS, LIMSI VENISE team & Université Paris-Saclay, CNRS, Inria, LRI, Orsay, France)Olivier Gladin (Université Paris-Saclay, CNRS, Inria, LRI, Orsay, France)Nicolas Ladevèze (Université Paris-Saclay, CNRS, LIMSI, VENISE team, Orsay, France)Cédric Fleury (Université Paris-Saclay, CNRS, Inria, LRI, Orsay, France)Patrick Bourdot (Université Paris-Saclay, CNRS, LIMSI VENISE team, Orsay, France)
Industrial design review is an iterative process which mainly relies on two steps involving many stakeholders: design discussion and CAD data adjustment. We investigate how a wall-sized display could be used to merge these two steps by allowing multidisciplinary collaborators to simultaneously generate and explore design alternatives. We designed ShapeCompare based on the feedback from a usability study. It enables multiple users to compute and distribute CAD data with touch interaction. To assess the benefit of the wall-sized display in such context, we ran a controlled experiment which aims to compare ShapeCompare with a visualization technique suitable for standard screens. The results show that pairs of participants performed a constraint solving task faster and used more deictic instructions with ShapeCompare. From these findings, we draw generic recommendations for collaborative exploration of alternatives.
2
Exploring The Future of Data-Driven Product Design
Katerina Gorkovenko (University of Edinburgh, Edinburgh, United Kingdom)Daniel J. Burnett (Lancaster University, Lancaster, Lancashire, United Kingdom)James K. Thorp (Lancaster University, Lancaster, Lancashire, United Kingdom)Daniel Richards (Lancaster University, Lancaster, Lancashire, United Kingdom)Dave Murray-Rust (University of Edinburgh, Edinburgh, United Kingdom)
Connected devices present new opportunities to advance design through data collection in the wild, similar to the way digital services evolve through analytics. However, it is still unclear how live data transmitted by connected devices informs the design of these products, going beyond performance optimisation to support creative practices. Design can be enriched by data captured by connected devices, from usage logs to environmental sensors, and data about the devices and people around them. Through a series of workshops, this paper contributes industry and academia perspectives on the future of data-driven product design. We highlight HCI challenges, issues and implications, including sensemaking and the generation of design insight. We further challenge current notions of data-driven design and envision ways in which future HCI research can develop ways to work with data in the design process in a connected, rich, human manner.
2
Virtual Reality Games for People Using Wheelchairs
Kathrin Gerling (Katholieke Universiteit Leuven, Leuven, Belgium)Patrick Dickinson (University of Lincoln, Lincoln, United Kingdom)Kieran Hicks (University of Lincoln, Lincoln, United Kingdom)Liam Mason (University of Lincoln, Lincoln, United Kingdom)Adalberto L. Simeone (Katholieke Universiteit Leuven, Leuven, Belgium)Katta Spiel (Katholieke Universiteit Leuven, Leuven, Belgium)
Virtual Reality (VR) holds the promise of providing engaging embodied experiences, but little is known about how people with disabilities engage with it. We explore challenges and opportunities of VR gaming for wheelchair users. First, we present findings from a survey that received 25 responses and gives insights into wheelchair users' motives to (non-) engage with VR and their experiences. Drawing from this survey, we derive design implications which we tested through implementation and qualitative evaluation of three full-body VR game prototypes with 18 participants. Our results show that VR gaming engages wheelchair users, though nuanced consideration is required for the design of embodied immersive experiences for minority bodies, and we illustrate how designers can create meaningful, positive experiences.
2
ScrAPIr: Making Web Data APIs Accessible to End Users
Tarfah Alrashed (Massachusetts Institute of Technology, Cambridge, MA, USA)Jumana Almahmoud (Massachusetts Institute of Technology, Cambridge, MA, USA)Amy X. Zhang (Massachusetts Institute of Technology, Cambridge, MA, USA)David R. Karger (Massachusetts Institute of Technology, Cambridge, MA, USA)
Users have long struggled to extract and repurpose data from websites by laboriously copying or scraping content from web pages. An alternative is to write scripts that pull data through APIs. This provides a cleaner way to access data than scraping; however, APIs are effortful for programmers and nigh-impossible for non-programmers to use. In this work, we empower users to access APIs without programming. We evolve a schema for declaratively specifying how to interact with a data API. We then develop ScrAPIr: a standard query GUI that enables users to fetch data through any API for which a specification exists, and a second GUI that lets users author and share the specification for a given API. From a lab evaluation, we find that even non-programmers can access APIs using ScrAPIr, while programmers can access APIs 3.8 times faster on average using ScrAPIr than using programming.
2
Too Hot to Handle: An Evaluation of the Effect of Thermal Visual Representation on User Grasping Interaction in Virtual Reality
Andreea Dalia Blaga (Birmingham City University, Birmingham, United Kingdom)Maite Frutos-Pascual (Birmingham City University, Birmingham, United Kingdom)Chris Creed (Birmingham City University, Birmingham, United Kingdom)Ian Williams (Birmingham City University, Birmingham West Midlands, United Kingdom)
Influence of interaction fidelity and rendering quality on perceived user experience have been largely explored in Virtual Reality (VR). However, differences in interaction choices triggered by these rendering cues have not yet been explored. We present a study analysing the effect of thermal visual cues and contextual information on 50 participants' approach to grasp and move a virtual mug. This study comprises 3 different temperature cues (baseline empty, hot and cold) and 4 contextual representations; all embedded in a VR scenario. We evaluate 2 different hand representations (abstract and human) to assess grasp metrics. Results show temperature cues influenced grasp location, with the mug handle being predominantly grasped with a smaller grasp aperture for the hot condition, while the body and top were preferred for baseline and cold conditions.
2
Get a Grip: Evaluating Grip Gestures for VR Input using a Lightweight Pen
Nianlong Li (Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China)Teng Han (Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China)Feng Tian (Institute of software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China)Jin Huang (Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China)Minghui Sun (Jilin University, Changchun, China)Pourang Irani (University of Manitoba, Winnipeg, Canada)Jason Alexander (Lancaster University, Lancaster, Lancashire, United Kingdom)
The use of Virtual Reality (VR) in applications such as data analysis, artistic creation, and clinical settings requires high precision input. However, the current design of handheld controllers, where wrist rotation is the primary input approach, does not exploit the human fingers' capability for dexterous movements for high precision pointing and selection. To address this issue, we investigated the characteristics and potential of using a pen as a VR input device. We conducted two studies. The first examined which pen grip allowed the largest range of motion---we found a tripod grip at the rear end of the shaft met this criterion. The second study investigated target selection via 'poking' and ray-casting, where we found the pen grip outperformed the traditional wrist-based input in both cases. Finally, we demonstrate potential applications enabled by VR pen input and grip postures.
2
A Literature Review of Quantitative Persona Creation
Joni Salminen (Qatar Computing Research Institute, Hamad Bin Khalifa University & University of Turku, Doha, Qatar)Kathleen Guan (Georgetown University, Washington, DC, USA)Soon-Gyo Jung (Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar)Shammur A. Chowdhury (Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar)Bernard J. Jansen (Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar)
Quantitative persona creation (QPC) has tremendous potential, as HCI researchers and practitioners can leverage user data from online analytics and digital media platforms to better understand their users and customers. However, there is a lack of a systematic overview of the QPC methods and progress made, with no standard methodology or known best practices. To address this gap, we review 49 QPC research articles from 2005 to 2019. Results indicate three stages of QPC research: Emergence, Diversification, and Sophistication. Sharing resources, such as datasets, code, and algorithms, is crucial to achieving the next stage (Maturity). For practitioners, we provide guiding questions for assessing QPC readiness in organizations.
2
"How do I make this thing smile?": An Inventory of Expressive Nonverbal Communication in Commercial Social Virtual Reality Platforms
Theresa Jean Tanenbaum (University of California, Irvine, Irvine, CA, USA)Nazely Hartoonian (University of California, Irvine, Irvine, CA, USA)Jeffrey Bryan (University of California, Irvine, Irvine, CA, USA)
Despite the proliferation of platforms for social Virtual Reality (VR) communicating emotional expression via an avatar remains a significant design challenge. In order to better understand the design space for expressive Nonverbal Communication (NVC) in social VR we undertook an inventory of the ten most prominent social VR platforms. Our inventory identifies the dominant design strategies for movement, facial control, and gesture in commercial VR applications, and identifies opportunities and challenges for future design and research into social expression in VR. Specifically, we highlight the paucity of interaction paradigms for facial expression and the near nonexistence of meaningful control over ambient aspects of nonverbal communication such as posture, pose, and social status.
2
Designing and Evaluating 'In the Same Boat', A Game of Embodied Synchronization for Enhancing Social Play
Raquel Breejon Robinson (University of Saskatchewan, Saskatoon, SK, Canada)Elizabeth Reid (University of Saskatchewan, Saskatoon, SK, Canada)James Collin Fey (University of California, Santa Cruz, Santa Cruz, CA, USA)Ansgar E. Depping (University of Saskatchewan, Saskatoon, SK, Canada)Katherine Isbister (University of California, Santa Cruz, Santa Cruz, CA, USA)Regan L. Mandryk (University of Saskatchewan, Saskatoon, SK, Canada)
Social closeness is important for health and well-being, but is difficult to maintain over a distance. Games can help connect people by strengthening existing relationships or creating new ones through shared playful experiences. We present the design and evaluation of 'In the Same Boat' (ITSB), a two-player infinite runner designed to foster social closeness in distributed dyads. ITSB leverages the synchronization of both players' input to steer a canoe down a river and avoid obstacles. We created two versions: embodied controls, which use players' physiological signals (breath rate, facial expressions), and standard keyboard controls. Results from a study with 35 dyads indicate that ITSB fostered affiliation, and while embodied controls were less intuitive, people enjoyed them more. Further, photos of the dyads were rated as happier and closer in the embodied condition, indicating the potential of embodied controls to foster social closeness in synchronized play over a distance.
2
BodyLights: Open-Ended Augmented Feedback to Support Training Towards a Correct Exercise Execution
Laia Turmo Vidal (Uppsala University, Uppsala, Sweden)Hui Zhu (Uppsala University, Uppsala, Sweden)Abraham Riego-Delgado (Uppsala University, Uppsala, Sweden)
Technologies targeting a correct execution of physical training exercises typically use pre-determined models for what they consider correct, automatizing instruction and feedback. This falls short on catering to diverse trainees and exercises. We explore an alternative design approach, in which technology provides open-ended feedback for trainers and trainees to use during training. With a personal trainer we designed the augmentation of 18 strength training exercises with BodyLights: 3D printed wearable projecting lights that augment body movement and orientation. To study them, 15 trainees at different skill levels trained three times with our personal trainer and BodyLights. Our findings show that BodyLights catered to a wide range of trainees and exercises, and supported understanding, executing and correcting diverse technique parameters. We discuss design features and methodological aspects that allowed this; and what open-ended feedback offered in comparison to current technology approaches to support training towards a correct exercise execution.
1
Improving Crowd-Supported GUI Testing with Structural Guidance
Yan Chen (University of Michigan, Ann Arbor, MI, USA)Maulishree Pandey (University of Michigan, Ann Arbor, MI, USA)Jean Y. Song (University of Michigan, Ann Arbor, MI, USA)Walter S. Lasecki (University of Michigan, Ann Arbor, MI, USA)Steve Oney (University of Michigan, Ann Arbor, MI, USA)
Crowd testing is an emerging practice in Graphical User Interface (GUI) testing, where developers recruit a large number of crowd testers to test GUI features. It is often easier and faster than a dedicated quality assurance team, and its output is more realistic than that of automated testing. However, crowds of testers working in parallel tend to focus on a small set of commonly-used User Interface (UI) navigation paths, which can lead to low test coverage and redundant effort. In this paper, we introduce two techniques to increase crowd testers' coverage: interactive event-flow graphs and GUI-level guidance. The interactive event-flow graphs track and aggregate every tester's interactions into a single directed graph that visualizes the cases that have already been explored. Crowd testers can interact with the graphs to find new navigation paths and increase the coverage of the created tests. We also use the graphs to augment the GUI (GUI-level guidance) to help testers avoid only exploring common paths. Our evaluation with 30 crowd testers on 11 different test pages shows that the techniques can help testers avoid redundant effort while also increasing untrained testers' coverage by 55%. These techniques can help us develop more robust software that works in more mission-critical settings not only by performing more thorough testing with the same effort that has been put in before but also by integrating them into different parts of the development pipeline to make more reliable software in the early development stage.
1
Embodied Learning in Immersive Smart Spaces
Mirko Gelsomini (Politecnico di Milano, Milano, Italy)Giulia Leonardi (Politecnico di Milano, Milano, Italy)Franca Garzotto (Politecnico di Milano, Milan, Italy)
This paper presents the design and evaluation of IMAGINE, a novel interactive immersive smart space for embodied learning. In IMAGINE children use full-body movements and gestures to interact with multimedia educational contents projected on the wall and on the floor, while synchronized light effects enhance immersivity. A controlled study performed at a primary school with 48 children aged 6-8 highlights the educational potential of an immersive embodied solution, also compared to traditional teaching methods, and draws some implications for smart-space technology adoption in educational contexts.
1
Improving Virtual Reality Ergonomics Through Reach-Bounded Non-Linear Input Amplification
Johann Wentzel (University of Waterloo, Waterloo, ON, Canada)Greg d'Eon (Universiy of British Columbia, Vancouver, BC, Canada)Daniel Vogel (University of Waterloo, Waterloo, ON, Canada)
Input amplification enables easier movement in virtual reality (VR) for users with mobility issues or in confined spaces. However, current techniques either do not focus on maintaining feelings of body ownership, or are not applicable to general VR tasks. We investigate a general purpose non-linear transfer function that keeps the user's reach within reasonable bounds to maintain body ownership. The technique amplifies smaller movements from a user-definable neutral point into the expected larger movements using a configurable Hermite curve. Two experiments evaluate the approach. The first establishes that the technique has comparable performance to the state-of-the-art, increasing physical comfort while maintaining task performance and body ownership. The second explores the characteristics of the technique over a wide range of amplification levels. Using the combined results, design and implementation recommendations are provided with potential applications to related VR transfer functions.
1
Scout: Rapid Exploration of Interface Layout Alternatives through High-Level Design Constraints
Amanda Swearngin (University of Washington, Seattle, WA, USA)Chenglong Wang (University of Washington, Seattle, WA, USA)Alannah Oleson (University of Washington, Seattle, WA, USA)James Fogarty (University of Washington, Seattle, WA, USA)Amy J. Ko (University of Washington, Seattle, WA, USA)
Although exploring alternatives is fundamental to creating better interface designs, current processes for creating alternatives are generally manual, limiting the alternatives a designer can explore. We present Scout, a system that helps designers rapidly explore alternatives through mixed-initiative interaction with high-level constraints and design feedback. Prior constraint-based layout systems use low-level spatial constraints and generally produce a single design. Tosupport designer exploration of alternatives, Scout introduces high-level constraints based on design concepts (e.g.,~semantic structure, emphasis, order) and formalizes them into low-level spatial constraints that a solver uses to generate potential layouts. In an evaluation with 18 interface designers, we found that Scout: (1) helps designers create more spatially diverse layouts with similar quality to those created with a baseline tool and (2) can help designers avoid a linear design process and quickly ideate layouts they do not believe they would have thought of on their own.
1
Progression Maps: Conceptualizing Narrative Structure for Interaction Design Support
Elin Carstensdottir (Northeastern University, Boston, MA, USA)Nathan Partlan (Northeastern University, Boston, MA, USA)Steven Sutherland (University of Houston-Clear Lake, Houston, TX, USA)Tyler Duke (University of Houston-Clear Lake, Houston, TX, USA)Erika Ferris (University of Houston-Clear Lake, Houston, TX, USA)Robin M. Richter (University of Houston-Clear Lake, Houston, TX, USA)Maria Valladares (University of Houston Clear Lake, Houston, TX, USA)Magy Seif El-Nasr (Northeastern University, Boston, MA, USA)
Interactive narratives are frequently designed for learning and training applications, such as social training. In these contexts, designers may be inexperienced in storytelling and interaction design, and it may be difficult to quickly build an effective experience, even for experienced designers. Designers often approach this problem through iterative design. To augment and reduce iteration, we argue for the utility of employing models to reason about, evaluate, and improve designs. While there has been much previous work on interactive narrative models, none of them capture aspects of the interaction design necessary for testing and evaluation. In this paper we propose a new computational model called Progression Maps, which abstracts interaction design elements of the narrative's structure and visualizes its interaction properties. We report on the model, its implementation, and two studies evaluating its use. Our results demonstrate Progression Maps' effectiveness in communicating the underlying design through an easily understandable visualization.
1
RoomShift: Room-scale Dynamic Haptics for VR with Furniture-moving Swarm Robots
Ryo Suzuki (University of Colorado Boulder, Boulder, CO, USA)Hooman Hedayati (University of Colorado Boulder, Boulder, CO, USA)Clement Zheng (University of Colorado Boulder, Boulder, CO, USA)James L. Bohn (University of Colorado Boulder, Boulder, CO, USA)Daniel Szafir (University of Colorado Boulder & ATLAS Institute, Boulder, CO, USA)Ellen Yi-Luen Do (University of Colorado Boulder & ATLAS Institute, Boulder, CO, USA)Mark D. Gross (University of Colorado Boulder & ATLAS Institute, Boulder, CO, USA)Daniel Leithinger (University of Colorado Boulder & ATLAS Institute, Boulder, CO, USA)
RoomShift is a room-scale dynamic haptic environment for virtual reality, using a small swarm of robots that can move furniture. RoomShift consists of nine shape-changing robots: Roombas with mechanical scissor lifts. These robots drive beneath a piece of furniture to lift, move and place it. By augmenting virtual scenes with physical objects, users can sit on, lean against, place and otherwise interact with furniture with their whole body; just as in the real world. When the virtual scene changes or users navigate within it, the swarm of robots dynamically reconfigures the physical environment to match the virtual content. We describe the hardware and software implementation, applications in virtual tours and architectural design and interaction techniques.
1
Informing the Design of Privacy-Empowering Tools for the Connected Home
William Seymour (University of Oxford, Oxford, United Kingdom)Martin J. Kraemer (University of Oxford, Oxford, United Kingdom)Reuben Binns (University of Oxford, Oxford, United Kingdom)Max Van Kleek (University of Oxford, Oxford, United Kingdom)
Connected devices in the home represent a potentially grave new privacy threat due to their unfettered access to the most personal spaces in people's lives. Prior work has shown that despite concerns about such devices, people often lack sufficient awareness, understanding, or means of taking effective action. To explore the potential for new tools that support such needs directly we developed Aretha, a privacy assistant technology probe that combines a network disaggregator, personal tutor, and firewall, to empower end-users with both the knowledge and mechanisms to control disclosures from their homes. We deployed Aretha in three households over six weeks, with the aim of understanding how this combination of capabilities might enable users to gain awareness of data disclosures by their devices, form educated privacy preferences, and to block unwanted data flows. The probe, with its novel affordances—and its limitations—prompted users to co-adapt, finding new control mechanisms and suggesting new approaches to address the challenge of regaining privacy in the connected home.
1
Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative Models
Ryan Louie (Northwestern University, Evanston, IL, USA)Andy Coenen (Google Research, Mountain View, CA, USA)Cheng Zhi Huang (Independent Researcher, Mountain View, CA, USA)Michael Terry (Google Research, Cambridge, MA, USA)Carrie J. Cai (Google Research, Mountain View, CA, USA)
While generative deep neural networks (DNNs) have demonstrated their capacity for creating novel musical compositions, less attention has been paid to the challenges and potential of co-creating with these musical AIs, especially for novices. In a needfinding study with a widely used, interactive musical AI, we found that the AI can overwhelm users with the amount of musical content it generates, and frustrate them with its non-deterministic output. To better match co-creation needs, we developed AI-steering tools, consisting of Voice Lanes that restrict content generation to particular voices; Example-Based Sliders to control the similarity of generated content to an existing example; Semantic Sliders to nudge music generation in high-level directions (happy/sad, conventional/surprising); and Multiple Alternatives of generated content to audition and choose from. In a summative study (N=21), we discovered the tools not only increased users' trust, control, comprehension, and sense of collaboration with the AI, but also contributed to a greater sense of self-efficacy and ownership of the composition relative to the AI.
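One way to read the Example-Based Slider idea, sketched here as generic filtered sampling rather than the authors' implementation: generate several candidates and keep the one whose similarity to a user-chosen example best matches the slider setting. The similarity measure below is a toy stand-in.

import random

def similarity(a, b):
    """Toy similarity between two note sequences: fraction of matching pitches."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def steer(generate, example, slider, n_candidates=8):
    """Pick the candidate whose similarity to `example` is closest to `slider` (0..1)."""
    candidates = [generate() for _ in range(n_candidates)]
    return min(candidates, key=lambda c: abs(similarity(c, example) - slider))

random.seed(0)
example = [60, 62, 64, 65]  # MIDI pitches
gen = lambda: [random.choice([60, 62, 64, 65, 67]) for _ in range(4)]
print(steer(gen, example, slider=0.75))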
1
See, Feel, Move: Player Behaviour Analysis through Combined Visualization of Gaze, Emotions, and Movement
Daniel Kepplinger (University of Applied Sciences Upper Austria, Hagenberg, Austria)Günter Wallner (Eindhoven University of Technology, Eindhoven, Netherlands)Simone Kriglstein (University of Vienna & AIT Austrian Institute of Technology GmbH, Vienna, Austria)Michael Lankes (University of Applied Sciences Upper Austria, Hagenberg, Austria)
Playtesting of games often relies on a mixed-methods approach to obtain more holistic insights about, and in turn improve, the player experience. However, triangulating the different data sources and visualizing them in an integrated manner, such that they contextualize each other, still proves challenging. Despite its potential value for gauging player behaviour, this area of research continues to be underexplored. In this paper, we propose a visualization approach that combines commonly tracked movement data with gaze behaviour and emotional responses, which are rarely considered from a visualization perspective. We evaluated our approach through a qualitative expert study with five professional game developers. Our results show that the individual visualizations of gaze, emotions, and movement, and especially their combination, are valuable for understanding and forming hypotheses about player behaviour. At the same time, our results stress that careful attention needs to be paid to ensure that the visualization remains legible and does not obfuscate information.
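A toy sketch of one way to combine the three channels in a single plot (synthetic data, not the paper's tool): colour the movement trace by emotional valence and overlay gaze samples.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200)
path = np.column_stack([t, np.sin(t)])           # player movement trace
valence = np.sin(t / 2)                          # emotional response over time
gaze = path + rng.normal(0, 0.15, path.shape)    # gaze samples near the path

fig, ax = plt.subplots()
sc = ax.scatter(path[:, 0], path[:, 1], c=valence, cmap="coolwarm", s=12,
                label="movement, coloured by emotion")
ax.plot(gaze[:, 0], gaze[:, 1], ".", color="grey", alpha=0.3, label="gaze")
fig.colorbar(sc, label="valence")
ax.legend()
plt.show()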
1
Connecting Distributed Families: Camera Work for Three-party Mobile Video Calls
Yumei Gan (The Chinese University of Hong Kong, Hong Kong, China)Christian Greiffenhagen (The Chinese University of Hong Kong, Hong Kong, China)Stuart Reeves (University of Nottingham, Nottingham, Nottinghamshire, United Kingdom)
Mobile video calling technologies have become a critical link connecting distributed families. However, these technologies have been principally designed for video calling between two parties, whereas family video calls involving young children often comprise three parties: a co-present adult (a parent or grandparent) helping with the interaction between the child and another, remote adult. We examine how manipulation of phone cameras and management of co-present children are used to stage parent-child interactions. We present results from a video-ethnographic study based on 40 recordings of video calls between 'left-behind' children and their migrant parents in China. Our analysis reveals a key practice of 'facilitation work', performed by grandparents, as a crucial feature of three-party calls. Facilitation work offers a new concept for HCI's broader conceptualisation of mobile video calling, suggesting revisions that design might take into consideration for triadic interactions in general.
1
Relationship Between Visual Complexity and Aesthetics of Webpages
Aliaksei Miniukovich (University of Trento, Trento, Italy)Maurizio Marchese (University of Trento, Trento, Italy)
Substantial HCI research has investigated the relationship between webpage complexity and aesthetics, but without a definitive conclusion. Some studies showed an inverse linear correlation, others showed an inverted u-shaped curve, and the rest showed no relationship at all. Such a lack of clarity complicates hypothesis formulation and result interpretation for future research, and lowers the reliability and generalizability of potential advice for Web design practice. We re-collected complexity and aesthetics ratings for five datasets previously used in webpage aesthetics and complexity research. The results were mixed, but suggested an inverse linear relationship with a weaker u-shaped sub-component. A subsequent visual inspection of the webpages revealed several confounding factors that may have led to the mixed results, including some webpages looking broken or archaic. The second data collection showed that accounting for these factors generally eliminates the u-shaped tendency of the complexity-aesthetics relationship, at least for a relatively homogeneous sample of English-speaking participants.
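The linear-versus-u-shaped question in studies like this is typically settled by comparing first- and second-order fits. A minimal sketch with synthetic ratings (not the paper's data):

import numpy as np

rng = np.random.default_rng(1)
complexity = rng.uniform(0, 10, 200)
# Synthetic ratings: a dominant inverse-linear trend plus noise.
aesthetics = 8 - 0.5 * complexity + rng.normal(0, 1, 200)

for degree in (1, 2):
    coeffs = np.polyfit(complexity, aesthetics, degree)
    residuals = aesthetics - np.polyval(coeffs, complexity)
    print(f"degree {degree}: RSS = {residuals @ residuals:.1f}")
# A u-shaped component would show up as a markedly lower RSS for
# degree 2 together with a significant quadratic coefficient.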
1
CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis
Yao Xie (University of California, Los Angeles, Los Angeles, CA, USA)Melody Chen (University of California, Los Angeles, Los Angeles, CA, USA)David Kao (University of California, Los Angeles, Los Angeles, CA, USA)Ge Gao (University of Maryland, College Park, MD, USA)Xiang 'Anthony' Chen (University of California, Los Angeles, Los Angeles, CA, USA)
The recent development of data-driven AI promises to automate medical diagnosis; however, most AI systems function as 'black boxes' to physicians with limited computational knowledge. Using medical imaging as a point of departure, we conducted three iterations of design activities to formulate CheXplain, a system that enables physicians to explore and understand AI-enabled chest X-ray analysis: (i) a paired survey between referring physicians and radiologists reveals whether, when, and what kinds of explanations are needed; (ii) a low-fidelity prototype co-designed with three physicians formulates eight key features; and (iii) a high-fidelity prototype evaluated by another six physicians provides detailed summative insights on how each feature enables the exploration and understanding of AI. We conclude by discussing recommendations for future work on designing and implementing explainable medical AI systems, organized around four recurring themes: motivation, constraint, explanation, and justification.
1
TandemTrack: Shaping Consistent Exercise Experience by Complementing a Mobile App with a Smart Speaker
Yuhan Luo (University of Maryland, College Park, MD, USA)Bongshin Lee (Microsoft Research, Redmond, WA, USA)Eun Kyoung Choe (University of Maryland, College Park, MD, USA)
Smart speakers such as Amazon Echo present promising opportunities for exploring voice interaction in the domain of in-home exercise tracking. In this work, we examine if and how voice interaction complements and augments a mobile app in promoting consistent exercise. We designed and developed TandemTrack, which combines a mobile app and an Alexa skill to support an exercise regimen, data capture, feedback, and reminders. We then conducted a four-week between-subjects study deploying TandemTrack to 22 participants who were instructed to follow a short daily exercise regimen: one group used only the mobile app, and the other group used both the app and the skill. We collected rich data on individuals' exercise adherence and performance, and on their use of voice and visual interactions, while examining how TandemTrack as a whole influenced their exercise experience. Reflecting on these data, we discuss the benefits and challenges of incorporating voice interaction to assist daily exercise, and implications for designing effective multimodal systems that support self-tracking and promote consistent exercise.
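A hedged sketch of the underlying data model such a multimodal system needs: log entries tagged by modality, from which adherence can be computed per study arm. The names and thresholds below are invented for illustration, not taken from TandemTrack.

from dataclasses import dataclass
from datetime import date

@dataclass
class ExerciseLog:
    day: date
    modality: str   # "app" or "voice"
    completed: bool

def adherence(logs, days_in_study):
    """Fraction of study days on which the participant completed a session."""
    done = {log.day for log in logs if log.completed}
    return len(done) / days_in_study

logs = [ExerciseLog(date(2020, 4, 1), "voice", True),
        ExerciseLog(date(2020, 4, 2), "app", True),
        ExerciseLog(date(2020, 4, 3), "voice", False)]
print(f"adherence: {adherence(logs, 28):.2%}")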
1
Outline Pursuits: Gaze-assisted Selection of Occluded Objects in Virtual Reality
Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom)Christopher Clarke (Lancaster University, Lancaster, United Kingdom)Xuesong Zhang (Katholieke Universiteit Leuven, Leuven, Belgium)Jenny Phu (Ludwig Maximilian University of Munich, Munich, Germany)Hans Gellersen (Aarhus University, Aarhus, Denmark)
In 3D environments, objects can be difficult to select when they overlap, as overlap reduces the available target area and increases selection ambiguity. We introduce Outline Pursuits, which extends a primary pointing modality for gaze-assisted selection of occluded objects. Candidate targets within a pointing cone are presented with an outline that is traversed by a moving stimulus. This affords completion of the selection by gaze attention to the intended target's outline motion, detected by matching the user's smooth pursuit eye movements. We demonstrate two techniques based on this concept: one with a controller as the primary pointer, and one in which Outline Pursuits are combined with head pointing for hands-free selection. Compared with conventional raycasting, the techniques require less movement for selection, as users do not need to reposition themselves for a better line of sight, and selection time and accuracy are less affected when targets become highly occluded.
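Smooth-pursuit matching of this kind is commonly implemented by correlating gaze samples with each stimulus trajectory over a time window; a minimal sketch of that general technique (illustrative, not the authors' code):

import numpy as np

def pursuit_match(gaze_xy, stimuli_xy, threshold=0.8):
    """Return the index of the outline stimulus whose motion best correlates
    with the gaze trajectory, or None if no stimulus passes the threshold.
    gaze_xy: (T, 2) gaze samples; stimuli_xy: list of (T, 2) stimulus paths."""
    best, best_r = None, threshold
    for i, stim in enumerate(stimuli_xy):
        # Correlate x and y components separately, then average.
        rx = np.corrcoef(gaze_xy[:, 0], stim[:, 0])[0, 1]
        ry = np.corrcoef(gaze_xy[:, 1], stim[:, 1])[0, 1]
        r = (rx + ry) / 2
        if r > best_r:
            best, best_r = i, r
    return best

t = np.linspace(0, 2 * np.pi, 120)
target = np.column_stack([np.cos(t), np.sin(t)])   # intended outline motion
gaze = target + np.random.default_rng(0).normal(0, 0.05, target.shape)
decoy = np.column_stack([np.cos(t + np.pi), np.sin(t + np.pi)])
print(pursuit_match(gaze, [decoy, target]))  # -> 1 (the followed target)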
1
Gripmarks: Using Hand Grips to Transform In-Hand Objects into Mixed Reality Input
Qian Zhou (Facebook Reality Labs & University of British Columbia, Redmond, WA, USA)Sarah Sykes (Facebook Reality Labs, Redmond, WA, USA)Sidney Fels (University of British Columbia, Vancouver, BC, Canada)Kenrick Kin (Facebook Reality Labs, Redmond, WA, USA)
We introduce Gripmarks, a system that enables users to opportunistically use objects they are already holding as input surfaces for mixed reality head-mounted displays (HMDs). Leveraging handheld objects reduces the need for users to free up their hands or acquire a controller to interact with their HMD. Gripmarks associate a particular hand grip with the shape primitive of the physical object, without the need for object recognition or instrumenting the object. From the grip pose and shape primitive we can infer the surface of the object. With an activation gesture, the object can then be used as input to the HMD. With five gripmarks we demonstrate a recognition rate of 94.2%; we show that our grip detection benefits from the physical constraints of holding an object. We explore two categories of input objects, 1) tangible surfaces and 2) tangible tools, and present two representative applications. We discuss the design and technical challenges of expanding the concept.
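A hedged sketch of the grip-to-primitive mapping described above, as a nearest-centroid classifier over hand-pose features. The feature vectors, gripmark names, and primitives here are all made up; a real system would learn centroids from tracking data.

import numpy as np

# Hypothetical centroids of hand-pose feature vectors (e.g., joint angles),
# one per registered gripmark.
GRIPMARKS = {
    "clipboard": np.array([0.9, 0.1, 0.2]),
    "bottle":    np.array([0.2, 0.8, 0.7]),
    "pen":       np.array([0.1, 0.2, 0.9]),
}
SHAPE_PRIMITIVE = {"clipboard": "plane", "bottle": "cylinder", "pen": "line"}

def classify_grip(features: np.ndarray) -> str:
    """Map an observed hand-pose feature vector to the closest gripmark."""
    return min(GRIPMARKS, key=lambda g: np.linalg.norm(GRIPMARKS[g] - features))

grip = classify_grip(np.array([0.85, 0.15, 0.25]))
print(grip, "->", SHAPE_PRIMITIVE[grip])  # clipboard -> plane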
1
Designing Ambient Narrative-Based Interfaces to Reflect and Motivate Physical Activity
Elizabeth L. Murnane (Stanford University, Stanford, CA, USA)Xin Jiang (Stanford University, Stanford, CA, USA)Anna Kong (Stanford University, Stanford, CA, USA)Michelle Park (Stanford University, Stanford, CA, USA)Weili Shi (Stanford University, Stanford, CA, USA)Connor Soohoo (Stanford University, Stanford, CA, USA)Luke Vink (Stanford University, Stanford, CA, USA)Iris Xia (Stanford University, Stanford, CA, USA)Xin Yu (Stanford University, Stanford, CA, USA)John Yang-Sammataro (Stanford University, Stanford, CA, USA)Grace Young (Stanford University, Stanford, CA, USA)Jenny Zhi (Stanford University, Stanford, CA, USA)Paula Moya (Stanford University, Stanford, CA, USA)James A. Landay (Stanford University, Stanford, CA, USA)
Numerous technologies now exist for promoting more active lifestyles. However, while quantitative data representations (e.g., charts, graphs, and statistical reports) typify most health tools, growing evidence suggests such feedback can not only fail to motivate behavior but may also harm self-integrity and fuel negative mindsets about exercise. Our research seeks to devise alternative, more qualitative schemes for encoding personal information. In particular, this paper explores the design of data-driven narratives, given the intuitive and persuasive power of stories. We present WhoIsZuki, a smartphone application that visualizes physical activities and goals as components of a multi-chapter quest, where the main character's progress is tied to the user's. We report on our design process involving online surveys, in-lab studies, and in-the-wild deployments, aimed at refining the interface and the narrative and gaining a deep understanding of people's experiences with this type of feedback. From these insights, we contribute recommendations to guide future development of narrative-based applications for motivating healthy behavior.
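The core mapping in narrative feedback of this kind can be sketched as tying a character's story progress to the fraction of the user's activity goal met. A minimal illustration under stated assumptions (function name, goal, and chapter count are invented, not from WhoIsZuki):

def chapter_progress(steps_today: int, daily_goal: int, chapters: int = 5) -> int:
    """Map goal completion (0..1) to how many story chapters are unlocked."""
    completion = min(steps_today / daily_goal, 1.0)
    return int(completion * chapters)

print(chapter_progress(6200, 8000))  # 3 of 5 chapters unlocked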