List of Notable Papers

Showing up to the top 30 papers in each category.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

21
Enhancing Mobile Voice Assistants with WorldGaze
Sven Mayer (Carnegie Mellon University, Pittsburgh, PA, USA)Gierad Laput (Apple Inc. & Carnegie Mellon University, Cupertino, CA, USA)Chris Harrison (Carnegie Mellon University, Pittsburgh, PA, USA)
Contemporary voice assistants require that objects of interest be specified in spoken commands. Of course, users are often looking directly at the object or place of interest – fine-grained, contextual information that is currently unused. We present WorldGaze, a software-only method for smartphones that provides the real-world gaze location of a user that voice agents can utilize for rapid, natural, and precise interactions. We achieve this by simultaneously opening the front and rear cameras of a smartphone. The front-facing camera is used to track the head in 3D, including estimating its direction vector. As the geometry of the front and back cameras are fixed and known, we can raycast the head vector into the 3D world scene as captured by the rear-facing camera. This allows the user to intuitively define an object or region of interest using their head gaze. We started our investigations with a qualitative exploration of competing methods, before developing a functional, real-time implementation. We conclude with an evaluation that shows WorldGaze can be quick and accurate, opening new multimodal gaze+voice interactions for mobile voice agents.
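To make the geometry concrete, here is a minimal Python sketch (not the authors' code) of the idea the abstract describes: a head-gaze ray estimated in the front camera's frame is re-expressed in the rear camera's frame and projected into the rear image. The extrinsics, intrinsics, and assumed scene depth below are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of the WorldGaze idea under assumed camera geometry.
import numpy as np

# Hypothetical fixed geometry: rear camera rotated 180° about the vertical axis.
R_FRONT_TO_REAR = np.diag([-1.0, 1.0, -1.0])
T_FRONT_TO_REAR = np.array([0.0, 0.01, 0.005])  # metres between the sensors

# Hypothetical rear-camera pinhole intrinsics (1280x720 image).
K_REAR = np.array([[1000.0,    0.0, 640.0],
                   [   0.0, 1000.0, 360.0],
                   [   0.0,    0.0,   1.0]])

def project_head_gaze(head_pos, head_dir, depth=2.0):
    """Project the head-gaze ray into rear-image pixels at an assumed depth."""
    target_front = head_pos + depth * head_dir / np.linalg.norm(head_dir)
    target_rear = R_FRONT_TO_REAR @ target_front + T_FRONT_TO_REAR
    uvw = K_REAR @ target_rear
    return uvw[:2] / uvw[2]

# Example: head 30 cm in front of the phone, gazing slightly right of straight ahead.
print(project_head_gaze(np.array([0.0, 0.0, 0.3]), np.array([0.1, 0.0, -1.0])))
```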
20
EarBuddy: Enabling On-Face Interaction via Wireless Earbuds
Xuhai Xu (University of Washington & Tsinghua University, Seattle, WA, USA)Haitian Shi (Tsinghua University & University of Washington, Seattle, WA, USA)Xin Yi (Tsinghua University & Key Laboratory of Pervasive Computing, Ministry of Education, Beijing, China)WenJia Liu (Beijing University of Posts and Telecommunications, Beijing, China)Yukang Yan (Tsinghua University, Beijing, China)Yuanchun Shi (Tsinghua University & Key Laboratory of Pervasive Computing, Ministry of Education, Beijing, China)Alex Mariakakis (University of Washington, Seattle, WA, USA)Jennifer Mankoff (University of Washington, Seattle, WA, USA)Anind K. Dey (University of Washington, Seattle, WA, USA)
Past research on on-body interaction has typically required custom sensors, limiting scalability and generalizability. We propose EarBuddy, a real-time system that leverages the microphone in commercial wireless earbuds to detect tapping and sliding gestures near the face and ears. We developed a design space to generate 27 valid gestures and conducted a user study (N=16) to select the eight gestures that were optimal for both human preference and microphone detectability. We collected a dataset on those eight gestures (N=20) and trained deep learning models for gesture detection and classification. Our optimized classifier achieved an accuracy of 95.3%. Finally, we conducted a user study (N=12) to evaluate EarBuddy's usability. Our results show that EarBuddy can facilitate novel interaction and that users feel very positively about the system. EarBuddy provides a new eyes-free, socially acceptable input method that is compatible with commercial wireless earbuds and has the potential for scalability and generalizability.
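For illustration, a stand-in sketch of a microphone gesture pipeline of the kind the abstract outlines: energy-based segmentation, coarse spectral-band features, and a classifier. EarBuddy itself uses deep learning models; the SVM, sample rate, window size, and threshold here are illustrative assumptions.

```python
# Stand-in microphone gesture pipeline: segment by energy, featurize, classify.
import numpy as np
from sklearn.svm import SVC

SR, WIN = 16_000, 256  # assumed sample rate (Hz) and analysis window (samples)

def frames(signal):
    n = len(signal) // WIN
    return signal[: n * WIN].reshape(n, WIN)

def detect_gesture(signal, energy_thresh=0.01):
    """Keep frames whose short-time energy exceeds a (hypothetical) threshold."""
    f = frames(signal)
    return f[(f ** 2).mean(axis=1) > energy_thresh]

def band_features(segment_frames, n_bands=16):
    """Average log spectral energy in coarse frequency bands."""
    spec = np.abs(np.fft.rfft(segment_frames, axis=1))
    bands = np.array_split(spec, n_bands, axis=1)
    per_frame = np.concatenate([b.mean(axis=1, keepdims=True) for b in bands], axis=1)
    return np.log1p(per_frame).mean(axis=0)

# Training would use labelled recordings of the eight gestures.
clf = SVC()
X, y = np.random.rand(80, 16), np.repeat(np.arange(8), 10)  # placeholder data
clf.fit(X, y)
```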
20
Defining Haptic Experience: Foundations for Understanding, Communicating, and Evaluating HX
Erin Kim (University of Waterloo, Waterloo, ON, Canada)Oliver Schneider (University of Waterloo, Waterloo, ON, Canada)
Haptic technology is maturing, with expectations and evidence that it will contribute to user experience (UX). However, we have very little understanding about how haptic technology can influence people's experience. Researchers and designers need a way to understand, communicate, and evaluate haptic technology's effect on UX. From a literature review and two studies – one with haptics novices, the other with expert hapticians – we developed a theoretical model of the factors that constitute a good haptic experience (HX). We define HX and propose its constituent factors: design parameters of Timeliness, Density, Intensity, and Timbre; the cross-cutting concern of Personalization; usability requirements of Utility, Causality, Consistency, and Saliency; and experiential factors of Harmony, Expressivity, Autotelics, Immersion, and Realism as guiding constructs important for haptic experience. This model will help guide design and research of haptic systems, inform language around haptics, and provide the basis for evaluative instruments, such as checklists, heuristics, or questionnaires.
19
When Design Novices and LEGO® Meet: Stimulating Creative Thinking for Interface Design
Simon Bourdeau (ESG-UQAM, Montréal, PQ, Canada)Annemarie Lesage (HEC Montréal, Montréal, PQ, Canada)Béatrice Caron (HEC Montréal, Montréal, PQ, Canada)Pierre-Majorique Léger (HEC Montréal, Montréal, PQ, Canada)
Design thinking is an iterative, human-centered approach to innovation. Its success rests on collaboration within a multidisciplinary project team going through cycles of divergent and convergent ideations. In these teams, nondesigners risk diminishing the divergent reach because they are generally reluctant to sketch, thus missing out on the ambiguous, imprecise early conceptual divergent phases. We hypothesized that LEGO® could advantageously substitute for sketching. In this comparative study, 44 nondesigners randomly paired in 22 dyads did two conceptual ideations of healthcare landing pages, one using pen/paper (spontaneously writing words on sticky notes) and the other using LEGO, assessed through Torrance and Guilford frameworks for divergent thinking. Results show that LEGO interfaces gathered significantly higher divergent thinking scores because their concepts were significantly more elaborated. Furthermore, when using LEGO, teams who generated more elements were likely to also generate more ideas, more categories of ideas and more original ideas.
19
Trigeminal-based Temperature Illusions
Jas Brooks (University of Chicago, Chicago, IL, USA)Steven Nagels (University of Chicago, Chicago, IL, USA)Pedro Lopes (University of Chicago, Chicago, IL, USA)
We explore a temperature illusion that uses low-powered electronics and enables the miniaturization of simple warm and cool sensations. Our illusion relies on the properties of certain scents, such as the coolness of mint or hotness of peppers. These odors trigger not only the olfactory bulb, but also the nose's trigeminal nerve, which has receptors that respond to both temperature and chemicals. To exploit this, we engineered a wearable device based on micropumps and an atomizer that emits up to three custom-made "thermal" scents directly to the user's nose. Breathing in these scents causes the user to feel warmer or cooler. We demonstrate how our device renders warmth and cooling sensations in virtual experiences. In our first study, we evaluated six candidate "thermal" scents. We found two hot-cold pairs, with one pair being less identifiable by odor. In our second study, participants rated VR experiences with our device's trigeminal stimulants as significantly warmer or cooler than the baseline conditions. Lastly, we believe this offers an alternative to existing thermal feedback devices, which unfortunately rely on power-hungry heat-lamps or Peltier-elements.
18
Social Acceptability in HCI: A Survey of Methods, Measures, and Design Strategies
Marion Koelle (University of Oldenburg & Saarland University, Saarland Informatics Campus, Oldenburg & Saarbrücken, Germany)Swamy Ananthanarayan (University of Oldenburg, Oldenburg, Germany)Susanne Boll (University of Oldenburg, Oldenburg, Germany)
With the increasing ubiquity of personal devices, social acceptability of human-machine interactions has gained relevance and growing interest from the HCI community. Yet, there are no best practices or established methods for evaluating social acceptability. Design strategies for increasing social acceptability have been described and employed, but so far not been holistically appraised and evaluated. We offer a systematic literature analysis (N=69) of social acceptability in HCI and contribute a better understanding of current research practices, namely, methods employed, measures and design strategies. Our review identified an unbalanced distribution of study approaches, shortcomings in employed measures, and a lack of interweaving between empirical and artifact-creating approaches. The latter causes a discrepancy between design recommendations based on user research, and design strategies employed in artifact creation. Our survey lays the groundwork for a more nuanced evaluation of social acceptability, the development of best practices, and a future research agenda.
18
Race Yourselves: A Longitudinal Exploration of Self-Competition Between Past, Present, and Future Performances in a VR Exergame
Alexander Michael (University of Bath, Bath, United Kingdom)Christof Lutteroth (University of Bath, Bath, United Kingdom)
Participating in competitive races can be a thrilling experience for athletes, involving a rush of excitement and sensations of flow, achievement, and self-fulfilment. However, for non-athletes, the prospect of competition is often a scary one which affects intrinsic motivation negatively, especially for less fit, less competitive individuals. We propose a novel method making the positive racing experience accessible to non-athletes using a high-intensity cycling VR exergame: by recording and replaying all their previous gameplay sessions simultaneously, including a projected future performance, players can race against a crowd of "ghost" avatars representing their individual fitness journey. The experience stays relevant and exciting as every race adds a new competitor. A longitudinal study over four weeks and a cross-sectional study found that the new method improves physical performance, intrinsic motivation, and flow compared to a non-competitive exergame. Additionally, the longitudinal study provides insights into the longer-term effects of VR exergames.
17
RCEA: Real-time, Continuous Emotion Annotation for Collecting Precise Mobile Video Ground Truth Labels
Tianyi Zhang (Centrum Wiskunde & Informatica and Delft University of Technology, Amsterdam & Delft, Netherlands)Abdallah El Ali (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands)Chen Wang (Xinhuanet, Beijing, China)Alan Hanjalic (Delft University of Technology, Delft, Netherlands)Pablo Cesar (Centrum Wiskunde & Informatica and Delft University of Technology, Amsterdam & Delft, Netherlands)
Collecting accurate and precise emotion ground truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports, or allow real-time, continuous emotion annotations (RCEA) only for desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching, and validated its usability and reliability in a controlled, indoor (N=12) and later outdoor (N=20) study. Drawing on physiological measures, interaction logs, and subjective workload reports, we show that (1) RCEA is perceived to be usable for annotating emotions while watching mobile videos, without increasing users' mental workload, and (2) the resulting time-variant annotations are comparable with intended emotion attributes of the video stimuli (classification error for valence: 8.3%; arousal: 25%). We contribute a validated annotation technique and associated annotation fusion method that is suitable for collecting fine-grained emotion annotations while users watch mobile videos.
17
A View on the Viewer: Gaze-Adaptive Captions for Videos
Kuno Kurzhals (ETH Zürich, Zürich, Switzerland)Fabian Göbel (ETH Zürich, Zürich, Switzerland)Katrin Angerbauer (University of Stuttgart, Stuttgart, Germany)Michael Sedlmair (University of Stuttgart, Stuttgart, Germany)Martin Raubal (ETH Zürich, Zürich, Switzerland)
Subtitles play a crucial role in cross-lingual distribution of multimedia content and help communicate information where auditory content is not feasible (loud environments, hearing impairments, unknown languages). Established methods utilize text at the bottom of the screen, which may distract from the video. Alternative techniques place captions closer to related content (e.g., faces) but are not applicable to arbitrary videos such as documentaries. Hence, we propose to leverage live gaze as an indirect input method to adapt captions to individual viewing behavior. We implemented two gaze-adaptive methods and compared them in a user study (n=54) to traditional captions and audio-only videos. The results show that viewers with less experience with captions prefer our gaze-adaptive methods as they assist them in reading. Furthermore, gaze distributions resulting from our methods are closer to natural viewing behavior compared to the traditional approach. Based on these results, we provide design implications for gaze-adaptive captions.
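As a toy illustration of gaze-adaptive caption placement (our reading of the concept, not the authors' implementation): if the viewer's gaze dwells in one region, show the caption near it; otherwise fall back to the traditional bottom position. All thresholds below are assumptions.

```python
# Toy gaze-adaptive caption placement with assumed dwell thresholds.
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080

def caption_position(gaze_samples, dwell_radius=80, dwell_frames=30):
    """Return an (x, y) caption anchor given recent gaze samples in pixels."""
    recent = np.asarray(gaze_samples[-dwell_frames:])
    if len(recent) == dwell_frames:
        center = recent.mean(axis=0)
        if np.all(np.linalg.norm(recent - center, axis=1) < dwell_radius):
            # Stable fixation: place the caption just below the gaze point.
            x = float(np.clip(center[0], 200, SCREEN_W - 200))
            y = float(np.clip(center[1] + 120, 0, SCREEN_H - 60))
            return x, y
    return SCREEN_W / 2, SCREEN_H - 60  # traditional bottom caption
```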
16
AirTouch: 3D-printed Touch-Sensitive Objects Using Pneumatic Sensing
Carlos E. Tejada (University of Copenhagen, Copenhagen, Denmark)Raf Ramakers (Hasselt University, Hasselt, Belgium)Sebastian Boring (Aalborg University, Copenhagen, Denmark)Daniel Ashbrook (University of Copenhagen, Copenhagen, Denmark)
3D printing technology can be used to rapidly prototype the look and feel of 3D objects. However, the objects produced are passive. There has been increasing interest in making these objects interactive, yet they often require assembling components or complex calibration. In this paper, we contribute AirTouch, a technique that enables designers to fabricate touch-sensitive objects with minimal assembly and calibration using pneumatic sensing. AirTouch-enabled objects are 3D printed as a single structure using a consumer-level 3D printer. AirTouch uses pre-trained machine learning models to identify interactions with fabricated objects, meaning that there is no calibration required once the object has completed printing. We evaluate our technique using fabricated objects with various geometries and touch-sensitive locations, obtaining accuracies of at least 90% with 12 interactive locations.
16
HiveFive: Immersion Preserving Attention Guidance in Virtual Reality
Daniel Lange (University of Oldenburg, Oldenburg, Germany)Tim Claudius Stratmann (OFFIS - Institute for Information Technology, Oldenburg, Germany)Uwe Gruenefeld (OFFIS - Institute for Information Technology, Oldenburg, Germany)Susanne Boll (University of Oldenburg, Oldenburg, Germany)
Recent advances in Virtual Reality (VR) technology, such as larger fields of view, have made VR increasingly immersive. However, a larger field of view often results in a user focusing on certain directions and missing relevant content presented elsewhere on the screen. With HiveFive, we propose a technique that uses swarm motion to guide user attention in VR. The goal is to seamlessly integrate directional cues into the scene without losing immersiveness. We evaluate HiveFive in two studies. First, we compare biological motion (from a prerecorded swarm) with non-biological motion (from an algorithm), finding further evidence that humans can distinguish between these motion types and that, contrary to our hypothesis, non-biological swarm motion results in significantly faster response times. Second, we compare HiveFive to four other techniques and show that it not only results in fast response times but also has the smallest negative effect on immersion.
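For readers curious how algorithmic (non-biological) swarm motion can be generated, below is a compact boids-style sketch with an added attraction term that pulls the swarm toward the cue target. The update rules and constants are generic assumptions, not HiveFive's algorithm.

```python
# Boids-style swarm motion steered toward a cue target (generic sketch).
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, (40, 3))     # 40 agents in scene units
vel = rng.normal(0, 0.01, (40, 3))

def step(pos, vel, target, dt=1 / 90):  # 90 Hz VR frame rate assumed
    cohesion = (pos.mean(axis=0) - pos) * 0.5
    separation = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos[i] - pos
        dist = np.linalg.norm(d, axis=1) + 1e-9
        separation[i] = (d / dist[:, None] * (dist < 0.1)[:, None]).sum(axis=0)
    attraction = (target - pos) * 1.5  # pulls the swarm toward the cue target
    vel = (vel + dt * (cohesion + separation + attraction)) * 0.98  # damped
    return pos + dt * vel, vel

for _ in range(300):  # the swarm drifts toward the target over ~3 seconds
    pos, vel = step(pos, vel, target=np.array([2.0, 0.0, 1.0]))
```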
16
Gripmarks: Using Hand Grips to Transform In-Hand Objects into Mixed Reality Input
Qian Zhou (Facebook Reality Labs & University of British Columbia, Redmond, WA, USA)Sarah Sykes (Facebook Reality Labs, Redmond, WA, USA)Sidney Fels (University of British Columbia, Vancouver, BC, Canada)Kenrick Kin (Facebook Reality Labs, Redmond, WA, USA)
We introduce Gripmarks, a system that enables users to opportunistically use objects they are already holding as input surfaces for mixed reality head-mounted displays (HMD). Leveraging handheld objects reduces the need for users to free up their hands or acquire a controller to interact with their HMD. Gripmarks associate a particular hand grip with the shape primitive of the physical object without the need of object recognition or instrumenting the object. From the grip pose and shape primitive we can infer the surface of the object. With an activation gesture, we can enable the object for use as input to the HMD. With five gripmarks we demonstrate a recognition rate of 94.2%; we show that our grip detection benefits from the physical constraints of holding an object. We explore two categories of input objects: 1) tangible surfaces and 2) tangible tools, and present two representative applications. We discuss the design and technical challenges for expanding the concept.
16
E-Textile Microinteractions: Augmenting Twist with Flick, Slide and Grasp Gestures for Soft Electronics
Alex Olwal (Google Research, Mountain View, CA, USA)Thad Starner (Google Research, Mountain View, CA, USA)Gowa Mainini (Google Research, Mountain View, CA, USA)
E-textile microinteractions advance cord-based interfaces by enabling the simultaneous use of precise continuous control and casual discrete gestures. We leverage the recently introduced I/O Braid sensing architecture to enable a series of user studies and experiments which help design suitable interactions and a real-time gesture recognition pipeline. Informed by a gesture elicitation study with 36 participants, we developed a user-dependent classifier for eight discrete gestures with 94% accuracy for 12 participants. In a formal evaluation we show that we can enable precise manipulation with the same architecture. Our quantitative targeting experiment suggests that twisting is faster than existing headphone button controls and is comparable in speed to a capacitive touch surface. Qualitative interview feedback indicates a preference for I/O Braid's interaction over that of in-line headphone controls. Our applications demonstrate how continuous and discrete gestures can be combined to form new, integrated e-textile microinteraction techniques for real-time continuous control, discrete actions and mode switching.
16
HeadReach: Utilizing Head Tracking to Address Reachability Issues on Mobile Touch Devices
Simon Voelker (RWTH Aachen University, Aachen, Germany)Sebastian Hueber (RWTH Aachen University, Aachen, Germany)Christian Corsten (RWTH Aachen University, Aachen, Germany)Christian Remy (Aarhus University, Aarhus, Denmark)
People often operate their smartphones with only one hand, using just their thumb for touch input. With today's larger smartphones, this leads to a reachability issue: Users can no longer comfortably touch everywhere on the screen without changing their grip. We investigate using the head tracking in modern smartphones to address this reachability issue. We developed three interaction techniques, pure head (PH), head + touch (HT), and head area + touch (HA), to select targets beyond the reach of one's thumb. In two user studies, we found that selecting targets using HT and HA had higher success rates than the default direct touch (DT) while standing (by about 9%) and walking (by about 12%), while being moderately slower. HT and HA were also faster than one of the best techniques, BezelCursor (BC) (by about 20% while standing and 6% while walking), while having the same success rate.
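A rough sketch of a head-assisted selection step like HA as we read it: head movement picks a coarse screen region, then the thumb disambiguates within it. The grid size and normalization are our assumptions, not the paper's parameters.

```python
# Map a normalized head offset to a coarse screen region (assumed 2x2 grid).
def head_to_region(head_dx, head_dy, cols=2, rows=2):
    """Map a head offset in [-1, 1] on each axis to one of cols x rows regions."""
    col = min(cols - 1, max(0, int((head_dx + 1) / 2 * cols)))
    row = min(rows - 1, max(0, int((head_dy + 1) / 2 * rows)))
    return row * cols + col

# Leaning toward the top-left selects region 0; a thumb tap then commits
# to a target inside that region.
print(head_to_region(-0.8, -0.6))  # -> 0
```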
16
Body Follows Eye: Unobtrusive Posture Manipulation Through a Dynamic Content Position in Virtual Reality
Joon Gi Shin (Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea)Doheon Kim (Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea)Chaehan So (Yonsei University, Seoul, Republic of Korea)Daniel Saakes (Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea)
While virtual objects are likely to be a part of future interfaces, we lack knowledge of how the dynamic position of virtual objects influences users' posture. In this study, we investigated users' posture change following the unobtrusive and swift motions of a content window in virtual reality (VR). In two perception studies, we estimated perception thresholds for undetectable slow motions and for displacement during an eye blink. In a formative study, we compared users' performance, posture change, and subjective responses for unobtrusive, swift, and no motions. Based on the results, we designed concept applications and explored the potential design space of moving virtual content for unobtrusive posture change. With our study, we discuss interfaces that control users and initial design guidelines for unobtrusive posture manipulation.
16
FaceHaptics: Robot Arm based Versatile Facial Haptics for Immersive Environments
Alexander Wilberz (Hochschule Bonn-Rhein-Sieg, Sankt Augustin, Germany)Dominik Leschtschow (Hochschule Bonn-Rhein-Sieg, Sankt Augustin, Germany)Christina Trepkowski (Hochschule Bonn-Rhein-Sieg, Sankt Augustin, Germany)Jens Maiero (Hochschule Bonn-Rhein-Sieg, Sankt Augustin, Germany)Ernst Kruijff (Hochschule Bonn-Rhein-Sieg, Sankt Augustin, Germany)Bernhard Riecke (Simon Fraser University, Vancouver, BC, Canada)
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional and movable haptic cues in the form of wind, warmth, moving and single-point touch events and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system, however, can in principle mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of cues, and can judge wind direction well, especially when they move their head and wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
15
Outline Pursuits: Gaze-assisted Selection of Occluded Objects in Virtual Reality
Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom)Christopher Clarke (Lancaster University, Lancaster, United Kingdom)Xuesong Zhang (Katholieke Universiteit Leuven, Leuven, Belgium)Jenny Phu (Ludwig Maximilian University of Munich, Munich, Germany)Hans Gellersen (Aarhus University, Aarhus, Denmark)
In 3D environments, objects can be difficult to select when they overlap, as this affects available target area and increases selection ambiguity. We introduce Outline Pursuits which extends a primary pointing modality for gaze-assisted selection of occluded objects. Candidate targets within a pointing cone are presented with an outline that is traversed by a moving stimulus. This affords completion of the selection by gaze attention to the intended target's outline motion, detected by matching the user's smooth pursuit eye movement. We demonstrate two techniques implemented based on the concept, one with a controller as the primary pointer, and one in which Outline Pursuits are combined with head pointing for hands-free selection. Compared with conventional raycasting, the techniques require less movement for selection as users do not need to reposition themselves for a better line of sight, and selection time and accuracy are less affected when targets become highly occluded.
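Smooth-pursuit matching is commonly implemented by correlating gaze motion with each candidate stimulus trajectory over a sliding window; the sketch below shows that general pattern. The window length and threshold are assumptions, not the paper's values.

```python
# Select the outline whose stimulus motion best correlates with gaze motion.
import numpy as np

def pursuit_match(gaze_xy, stimuli_xy, threshold=0.8):
    """gaze_xy: (T, 2) gaze samples; stimuli_xy: name -> (T, 2) trajectories."""
    scores = {}
    for name, traj in stimuli_xy.items():
        rx = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
        ry = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
        scores[name] = min(rx, ry)  # gaze must follow the motion on both axes
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None
```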
15
CARoma Therapy: Pleasant Scents Promote Safer Driving, Better Mood, and Improved Well-Being in Angry Drivers
Dmitrijs Dmitrenko (University of Sussex, Brighton, United Kingdom)Emanuela Maggioni (University of Sussex, Brighton, United Kingdom)Giada Brianza (University of Sussex, Brighton, United Kingdom)Brittany E. Holthausen (Georgia Institute of Technology, Atlanta, GA, USA)Bruce N. Walker (Georgia Institute of Technology, Atlanta, GA, USA)Marianna Obrist (University of Sussex, Brighton, United Kingdom)
Driving is a task that is often affected by emotions. The effect of emotions on driving has been extensively studied, and anger is an emotion that dominates such investigations. Despite the known strong links between scents and emotions, few studies have explored the effect of olfactory stimulation in the context of driving, leaving HCI practitioners with very little knowledge on how to design for emotions using olfactory stimulation in the car. We carried out three studies to select scents of different valence and arousal levels (i.e. rose, peppermint, and civet) and anger-eliciting stimuli (i.e. affective pictures and on-road events). We used this knowledge to conduct a fourth user study investigating how the selected scents change the emotional state, well-being, and driving behaviour of drivers in an induced angry state. Our findings enable better decisions on what scents to choose when designing interactions for angry drivers.
15
Kirigami Haptic Swatches: Design Methods for Cut-and-Fold Haptic Feedback Mechanisms
Zekun Chang (University of Tokyo, Tokyo, Japan)Tung D. Ta (University of Tokyo, Tokyo, Japan)Koya Narumi (University of Tokyo, Tokyo, Japan)Heeju Kim (University of Tokyo, Tokyo, Japan)Fuminori Okuya (University of Tokyo, Hongo, Japan)Dongchi Li (University of Tokyo, Tokyo, Japan)Kunihiro Kato (University of Tokyo, Tokyo, Japan)Jie Qi (University of Tokyo, Tokyo, Japan)Yoshinobu Miyamoto (Aichi Institute of Technology, Toyota, Japan)Kazuya Saito (Kyushu University, Minami-Ku, Japan)Yoshihiro Kawahara (University of Tokyo, Tokyo, Japan)
Kirigami Haptic Swatches demonstrate how kirigami and origami based structures enable sophisticated haptic feedback through simple cut-and-fold fabrication techniques. We leverage four types of geometric patterns: rotational erection system (RES), split-fold waterbomb (SFWB), the overlaid structure of SFWB and RES (SFWB+RES), and cylindrical origami, to render different sets of haptic feedback (i.e. linear, bistable, bouncing snap-through, and rotational force behaviors, respectively). In each structure, not only the form factor but also the force feedback properties can be tuned through geometric parameters. We experimentally analyzed and modeled the structures, and implemented software to automatically generate 2D patterns for desired haptic properties. We also demonstrate five example applications including an assistive custom keyboard, rotational switch, multi-sensory toy, task checklist, and phone accessories. We believe Kirigami Haptic Swatches help tinkerers, designers, and even researchers create interactions that enrich our haptic experience.
15
Celebrating Everyday Success: Improving Engagement and Motivation using a System for Recording Daily Highlights
Daniel Avrahami (FXPAL, Palo Alto, CA, USA)Kristin Williams (Carnegie Mellon University, Pittsburgh, PA, USA)Matthew L. Lee (FXPAL, Palo Alto, CA, USA)Nami Tokunaga (Fuji Xerox, Yokohama, Japan)Yulius Tjahjadi (FXPAL, Palo Alto, CA, USA)Jennifer Marlow (FXPAL, Palo Alto, CA, USA)
The demands of daily work offer few opportunities for workers to take stock of their own progress, big or small, which can lead to lower motivation, engagement, and higher risk of burnout. We present Highlight Matome, a personal online tool that encourages workers to quickly record and rank a single work highlight each day, helping them gain awareness of their own successes. We describe results from a field experiment investigating our tool's effectiveness for improving workers' engagement, perceptions, and affect. Thirty-three knowledge workers in Japan and the U.S. used Highlight Matome for six weeks. Our results show that using our tool for less than one minute each day significantly increased measures of work engagement, dedication, and positivity. A qualitative analysis of the highlights offers a window into participants' emotions and perceptions. We discuss implications for theories of inner work life and worker well-being.
14
Mouillé: Exploring Wetness Illusion on Fingertips to Enhance Immersive Experience in VR
Teng Han (Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China)Sirui Wang (Institute of Software, Chinese Academy of Sciences, Beijing, China)Sijia Wang (Carnegie Mellon University, Pittsburgh, PA, USA)Xiangmin Fan (Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China)Jie Liu (Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China)Feng Tian (Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China)Mingming Fan (Rochester Institute of Technology, Rochester, NY, USA)
Providing users with rich sensations is beneficial to enhance their immersion in Virtual Reality (VR) environments. Wetness is one such imperative sensation that affects users' sense of comfort and helps users adjust grip force when interacting with objects. Researchers have recently begun to explore ways to create wetness illusions, primarily on a user's face or body skin. In this work, we extended this line of research by creating a wetness illusion on users' fingertips. We first conducted a user study to understand the effect of thermal and tactile feedback on users' perceived wetness sensation. Informed by the findings, we designed and evaluated a prototype---Mouillé---that provides various levels of wetness illusions on fingertips for both hard and soft items when users squeeze, lift, or scratch it. Study results indicated that users were able to feel wetness with different levels of temperature changes and they were able to distinguish three levels of wetness for simulated VR objects. We further presented applications that simulate objects such as an ice cube, an iced cola bottle, and a wet sponge to demonstrate its use in VR.
14
Designing for Social Interaction in the Age of Excessive Smartphone Use
Hüseyin Uğur Genç (Koç University, Istanbul, Turkey)Aykut Coşkun (Koç University, Istanbul, Turkey)
Excessive smartphone use has negative effects on our social relations as well as on our mental and psychological health. Most of the previous work to avoid these negative effects is based on a top-down approach such as restricting or limiting users' use of smartphones. Diverging from previous work, we followed a bottom-up approach to understand the practice of smartphone use in public settings from the users' perspective. We conducted observations in four coffeehouses, six focus group sessions with 46 participants and three design workshops with 15 designers. We identified five themes that help better understand smartphone use behavior in public settings and four alternative design approaches to mediate this behavior, namely enlighteners, preventers, supporters, and compliers. We discuss the implications of these themes and approaches for designing future interactive technologies aimed at mediating excessive smartphone use behavior.
14
See, Feel, Move: Player Behaviour Analysis through Combined Visualization of Gaze, Emotions, and Movement
Daniel Kepplinger (University of Applied Sciences Upper Austria, Hagenberg, Austria)Günter Wallner (Eindhoven University of Technology, Eindhoven, Netherlands)Simone Kriglstein (University of Vienna & AIT Austrian Institute of Technology GmbH, Vienna, Austria)Michael Lankes (University of Applied Sciences Upper Austria, Hagenberg, Austria)
Playtesting of games often relies on a mixed-methods approach to obtain more holistic insights about and, in turn, improve the player experience. However, triangulating the different data sources and visualizing them in an integrated manner such that they contextualize each other still proves challenging. Despite its potential value for gauging player behaviour, this area of research continues to be underexplored. In this paper, we propose a visualization approach that combines commonly tracked movement data with gaze behaviour and emotional responses, which are rarely considered from a visualization perspective. We evaluated our approach through a qualitative expert study with five professional game developers. Our results show that both the individual visualization of gaze, emotions, and movement but especially their combination are valuable to understand and form hypotheses about player behaviour. At the same time, our results stress that careful attention needs to be paid to ensure that the visualization remains legible and does not obfuscate information.
14
KirigamiTable: Designing for Proxemic Transitions with a Shape-Changing Tabletop
Jens Emil Grønbæk (Aarhus University, Aarhus, Denmark)Majken Kirkegaard Rasmussen (Aarhus University, Aarhus, Denmark)Kim Halskov (Aarhus University, Aarhus N, Denmark)Marianne Graves Petersen (Aarhus University, Aarhus, Denmark)
A core challenge in tabletop research is to support transitions between individual activities and team work. Shape-changing tabletops open up new opportunities for addressing this challenge. However, interaction design for shape-changing furniture is in its early stages: so far, research has mainly focused on triggering shape-changes, and less on the actual interface transitions. We present KirigamiTable, a novel actuated shape-changing tabletop for supporting transitions in collaborative work. Our work builds on the concept of Proxemic Transitions, considering the dynamic interplay between social interactions, interactive technologies and furniture. With KirigamiTable, we demonstrate the potential of interactions for proxemic transitions that combine transformation of shape and digital contents. We highlight challenges for shape-changing tabletops: initiating shape and content transformations, cooperative control, and anticipating shape-change. To address these challenges, we propose a set of novel interaction techniques, including shape-first and content-first interaction, cooperative gestures, and physical and digital preview of shape-changes.
14
Better Because It's New: The Impact of Perceived Novelty on the Added Value of Mid-Air Haptic Feedback
Isa Rutten (Katholieke Universiteit Leuven, Leuven, Belgium)David Geerts (Katholieke Universiteit Leuven, Leuven, Belgium)
Mid-air haptic (MAH) feedback, providing touch feedback through ultrasound, has been considered an attractive substitute for the absence of physical touch during gesture-based interaction. Although the impact of MAH feedback on workload has already received some attention, the impact on other qualities of the user experience, including general attractiveness and experienced pleasure, has received less attention. In this preregistered study, involving 32 participants, we observed an added value of MAH feedback, on top of visual feedback, by increasing the attractiveness and experienced pleasure during gesture-based interaction, but not by decreasing workload. The added value regarding pleasure and attractiveness disappeared, however, after statistically controlling for perceived novelty. This paper highlights the importance of statistically controlling for novelty when testing the user experience of new technology during first-time use.
14
Evaluating a Personalizable, Inconspicuous Vibrotactile (PIV) Breathing Pacer for In-the-Moment Affect Regulation
Pardis Miri (Stanford University, Palo Alto, CA, USA)Emily Jusuf (Stanford University, Palo Alto, CA, USA)Andero Uusberg (Stanford University, Stanford, CA, USA)Horia Margarit (Stanford University, Palo Alto, CA, USA)Robert Flory (Intel, Hillsboro, OR, USA)Katherine Isbister (University of California, Santa Cruz, Santa Cruz, CA, USA)Keith Marzullo (University of Maryland, College Park, MD, USA)James J. Gross (Stanford University, Stanford, CA, USA)
Given the prevalence and adverse impact of anxiety, there is considerable interest in using technology to regulate anxiety. Evaluating the efficacy of such technology in terms of both the average effect (the intervention efficacy) and the heterogeneous effect (for whom and in what context the intervention was effective) is of paramount importance. In this paper, we demonstrate the efficacy of PIV, a personalized breathing pacer, in reducing anxiety in the presence of a cognitive stressor. We also quantify the relation between our specific stressor and PIV-user engagement. To our knowledge, this is the first mixed-design study of a vibrotactile affect regulation technology which accounts for a specific stressor and for individual differences in relation to the technology's efficacy. Guidelines in this paper can be applied for designing and evaluating other affect regulation technologies.
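As a minimal illustration of a vibrotactile breathing pacer signal (our assumption of one plausible design, not the paper's actual actuation pattern), the sketch below generates a smooth amplitude envelope at a slowed breathing rate.

```python
# Generate a smooth inhale/exhale amplitude envelope for a vibrotactile actuator.
import numpy as np

def pacer_envelope(breaths_per_min=6, seconds=60, fs=1000):
    """Amplitude envelope in [0, 1]; rises on inhale, falls on exhale."""
    t = np.arange(int(seconds * fs)) / fs
    phase = 2 * np.pi * (breaths_per_min / 60) * t
    return 0.5 * (1 - np.cos(phase))  # one smooth cycle per breath

env = pacer_envelope()  # 60 s of guidance at 6 breaths per minute
```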
14
Isness: Using Multi-Person VR to Design Peak Mystical Type Experiences Comparable to Psychedelics
David R. Glowacki (University of Bristol & ArtSci International, Bristol, United Kingdom)Mark D. Wonnacott (University of Bristol, Bristol, United Kingdom)Rachel Freire (Rachel Freire Studio & University of Bristol, London, United Kingdom)Becca R. Glowacki (Goldsmiths, University of London, London, United Kingdom)Ella M. Gale (University of Bristol, Bristol, United Kingdom)James E. Pike (ArtSci International, Bristol, United Kingdom)Tiu de Haan (ArtSci International, Bristol, United Kingdom)Mike Chatziapostolou (ArtSci International, Bristol, United Kingdom)Oussama Metatla (University of Bristol, Bristol, United Kingdom)
Studies combining psychotherapy with psychedelic drugs (ΨDs) have demonstrated positive outcomes that are often associated with ΨDs' ability to induce 'mystical-type' experiences (MTEs) – i.e., subjective experiences whose characteristics include a sense of connectedness, transcendence, and ineffability. We suggest that both ΨDs and virtual reality can be situated on a broader spectrum of psychedelic technologies. To test this hypothesis, we used concepts, methods, and analysis strategies from ΨD research to design and evaluate 'Isness', a multi-person VR journey where participants experience the collective emergence, fluctuation, and dissipation of their bodies as energetic essences. A study (N=57) analyzing participant responses to a commonly used ΨD experience questionnaire (MEQ30) indicates that Isness participants reported MTEs comparable to those reported in double-blind clinical studies after high doses of psilocybin and LSD. Within a supportive setting and conceptual framework, VR phenomenology can create the conditions for MTEs from which participants derive insight and meaning.
13
Bot in the Bunch: Facilitating Discussion in Group Chat by Improving Efficiency and Participation with a Chatbot
Soomin Kim (Seoul National University, Seoul, Republic of Korea)Jinsu Eun (Seoul National University, Seoul, Republic of Korea)Changhoon Oh (Carnegie Mellon University, Pittsburgh, PA, USA)Bongwon Suh (Seoul National University, Seoul, Republic of Korea)Joonhwan Lee (Seoul National University, Seoul, Republic of Korea)
Although group chat discussions are prevalent in daily life, they have a number of limitations. When discussing in a group chat, reaching a consensus often takes time, members contribute unevenly to the discussion, and messages are unorganized. Hence, we aimed to explore the feasibility of a facilitator chatbot agent to improve group chat discussions. We conducted a needfinding survey to identify key features for a facilitator chatbot. We then implemented GroupfeedBot, a chatbot agent that could facilitate group discussions by managing the discussion time, encouraging members to participate evenly, and organizing members' opinions. To evaluate GroupfeedBot, we performed preliminary user studies across diverse tasks and different group sizes. We found that the group with GroupfeedBot appeared to exhibit more diversity in opinions even though there were no differences in output quality and message quantity. On the other hand, GroupfeedBot promoted members' even participation and effective communication for the medium-sized group.
13
GazeConduits: Calibration-Free Cross-Device Collaboration through Gaze and Touch
Simon Voelker (RWTH Aachen University, Aachen, Germany)Sebastian Hueber (RWTH Aachen University, Aachen, Germany)Christian Holz (ETH Zürich, Zürich, Switzerland)Christian Remy (Aarhus University, Aarhus, Denmark)Nicolai Marquardt (University College London, London, United Kingdom)
We present GazeConduits, a calibration-free ad-hoc mobile interaction concept that enables users to collaboratively interact with tablets, other users, and content in a cross-device setting using gaze and touch input. GazeConduits leverages recently introduced smartphone capabilities to detect facial features and estimate users' gaze directions. To join a collaborative setting, users place one or more tablets onto a shared table and position their phone in the center, which then tracks users present as well as their gaze direction to determine the tablets they look at. We present a series of techniques using GazeConduits for collaborative interaction across mobile devices for content selection and manipulation. Our evaluation with 20 simultaneous tablets on a table shows that GazeConduits can reliably identify which tablet or collaborator a user is looking at.
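The tablet-identification step can be pictured geometrically: compare the estimated gaze direction with the direction from the user's head to each tablet, and accept the closest match within a tolerance. The sketch below illustrates this; positions and the angular threshold are hypothetical, not taken from the paper.

```python
# Pick the tablet whose direction best aligns with the estimated gaze ray.
import numpy as np

def looked_at_tablet(head_pos, gaze_dir, tablet_positions, max_angle_deg=15):
    gaze = gaze_dir / np.linalg.norm(gaze_dir)
    best, best_angle = None, float("inf")
    for name, pos in tablet_positions.items():
        to_tablet = (pos - head_pos) / np.linalg.norm(pos - head_pos)
        angle = np.degrees(np.arccos(np.clip(gaze @ to_tablet, -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = name, angle
    return best if best_angle < max_angle_deg else None
```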
13
"I Hear You, I Feel You": Encouraging Deep Self-disclosure through a Chatbot
Yi-Chieh Lee (University of Illinois at Urbana-Champaign & NTT Japan, Champaign, IL, USA)Naomi Yamashita (NTT Japan, Keihanna, Japan)Yun Huang (University of Illinois at Urbana-Champaign, Champaign, IL, USA)Wai Fu (University of Illinois at Urbana-Champaign, Champaign, IL, USA)
Chatbots have great potential to serve as a low-cost, effective tool to support people's self-disclosure. Prior work has shown that reciprocity occurs in human-machine dialog; however, whether reciprocity can be leveraged to promote and sustain deep self-disclosure over time has not been systematically studied. In this work, we design, implement and evaluate a chatbot that has self-disclosure features when it performs small talk with people. We ran a study with 47 participants and divided them into three groups to use different chatting styles of the chatbot for three weeks. We found that chatbot self-disclosure had a reciprocal effect, promoting deeper participant self-disclosure that lasted over the study period, an effect that the chat styles without self-disclosure features failed to deliver. Chatbot self-disclosure also had a positive effect on improving participants' perceived intimacy and enjoyment over the study period. Finally, we reflect on the design implications of chatbots where deep self-disclosure is needed over time.
13
Framing Effects Influence Interface Feature Decisions
Andy Cockburn (University of Canterbury, Christchurch, New Zealand)Blaine Lewis (University of Toronto, Toronto, ON, Canada)Philip Quinn (Google, Mountain View, CA, USA)Carl Gutwin (University of Saskatchewan, Saskatoon, SK, Canada)
Studies in psychology have shown that framing effects, where the positive or negative attributes of logically equivalent choices are emphasised, influence people's decisions. When outcomes are uncertain, framing effects also induce patterns of choice reversal, where decisions tend to be risk averse when gains are emphasised and risk seeking when losses are emphasised. Studies of these effects typically use potent framing stimuli, such as the mortality of people suffering from diseases or personal financial standing. We examine whether these effects arise in users' decisions about interface features, which typically have less visceral consequences, using a crowd-sourced study based on snap-to-grid drag-and-drop tasks (n = 842). The study examined several framing conditions: those similar to prior psychological research, and those similar to typical interaction choices (enabling/disabling features). Results indicate that attribute framing strongly influences users' decisions, that these decisions conform to patterns of risk seeking for losses, and that patterns of choice reversal occur.
12
Miniature Haptics: Experiencing Haptic Feedback through Hand-based and Embodied Avatars
Bo-Xiang Wang (National Taiwan University, Taipei, Taiwan ROC)Yu-Wei Wang (National Taiwan University, Taipei, Taiwan ROC)Yen-Kai Chen (National Taiwan University, Taipei, Taiwan ROC)Chun-Miao Tseng (National Taiwan University, Taipei, Taiwan ROC)Min-Chien Hsu (National Taiwan University, Taipei, Taiwan ROC)Cheng An Hsieh (National Taiwan University, Taipei, Taiwan ROC)Hsin-Ying Lee (National Taiwan University, Taipei, Taiwan ROC)Mike Y. Chen (National Taiwan University, Taipei, Taiwan ROC)
We present Miniature Haptics, a new approach to providing realistic haptic experiences by applying miniaturized haptic feedback to hand-based, embodied avatars. By shrinking haptics to a much smaller scale, Miniature Haptics enables the exploration of new haptic experiences that are not practical to create at the full, human-body scale. Using Finger Walking in Place (FWIP) as an example avatar embodiment and control method, we first explored the feasibility of Miniature Haptics then conducted a human factors study to understand how people map their full-body skeletal model to their hands. To understand the user experience of Miniature Haptics, we developed a miniature football haptic display, and results from our user study show that Miniature Haptics significantly improved the realism and enjoyment of the experience and is preferred by users (p < 0.05). In addition, we present two miniature motion platforms supporting the haptic experiences of: 1) rapidly changing ground height for platform jumping games such as Super Mario Bros and 2) changing terrain slope. Overall, Miniature Haptics makes it possible to explore novel haptic experiences that have not been practical before.
12
JumpVR: Jump-Based Locomotion Augmentation for Virtual Reality
Dennis Wolf (Ulm University, Ulm, Germany)Katja Rogers (Ulm University, Ulm, Germany)Christoph Kunder (Ulm University, Ulm, Germany)Enrico Rukzio (Ulm University, Ulm, Germany)
One of the great benefits of virtual reality (VR) is the implementation of features that go beyond realism. Common "unrealistic" locomotion techniques (like teleportation) can avoid spatial limitation of tracking, but minimize potential benefits of more realistic techniques (e.g. walking). As an alternative that combines realistic physical movement with hyper-realistic virtual outcome, we present JumpVR, a jump-based locomotion augmentation technique that virtually scales users' physical jumps. In a user study (N=28), we show that jumping in VR (regardless of scaling) can significantly increase presence, motivation and immersion compared to teleportation, while largely not increasing simulator sickness. Further, participants reported higher immersion and motivation for most scaled jumping variants than forward-jumping. Our work shows the feasibility and benefits of jumping in VR and explores suitable parameters for its hyper-realistic scaling. We discuss design implications for VR experiences and research.
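A tiny sketch of jump scaling as we read the technique: only the in-air portion of the tracked headset height is multiplied by a gain before being applied to the virtual viewpoint. The gain and baseline handling are assumptions, not the paper's parameters.

```python
# Amplify only the in-air portion of a tracked jump (assumed gain of 3x).
def virtual_height(physical_y, baseline_y, gain=3.0):
    jump = max(0.0, physical_y - baseline_y)  # height above standing baseline
    return baseline_y + gain * jump           # amplify only the jump itself

print(virtual_height(physical_y=2.0, baseline_y=1.7))  # 0.3 m jump -> 0.9 m
```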
12
Comparing Smartphone Speech Recognition and Touchscreen Typing for Composition and Transcription
Margaret Foley (University of Waterloo, Waterloo, ON, Canada)Géry Casiez (Université de Lille, University of Waterloo, Inria, Institut Universitaire de France, Villeneuve d'Ascq, France)Daniel Vogel (University of Waterloo, Waterloo, ON, Canada)
Ruan et al. found that transcribing short phrases with speech recognition was nearly 200% faster than typing on a smartphone. We extend this comparison to a novel composition task, using a protocol that enables a controlled comparison with transcription. Results show that both composing and transcribing with speech is faster than typing. However, the magnitude of this difference is smaller for composition, and speech has a lower error rate than the keyboard during composition, but not during transcription. When transcribing, speech outperformed typing in most NASA-TLX measures, but when composing, there were no significant differences between typing and speech for any measure except physical demand.
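For context, comparisons like this one conventionally rest on two standard text-entry measures: words per minute (with a five-character word) and an error rate derived from edit distance. The sketch below implements those conventions (they are the field's standard formulas, not code from this paper).

```python
# Standard text-entry metrics: WPM and edit-distance-based error rate.
def wpm(transcribed: str, seconds: float) -> float:
    return (len(transcribed) / 5) / (seconds / 60)

def error_rate(transcribed: str, reference: str) -> float:
    """Uncorrected error rate via Levenshtein distance (dynamic programming)."""
    m, n = len(transcribed), len(reference)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = transcribed[i - 1] != reference[j - 1]
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n] / max(m, n)

print(wpm("the quick brown fox", seconds=6.0))  # ~38 WPM
```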
12
Wearable Microphone Jamming
Yuxin Chen (University of Chicago, Chicago, IL, USA)Huiying Li (University of Chicago, Chicago, IL, USA)Shan-Yuan Teng (University of Chicago, Chicago, IL, USA)Steven Nagels (University of Chicago, Chicago, IL, USA)Zhijing Li (University of Chicago, Chicago, IL, USA)Pedro Lopes (University of Chicago, Chicago, IL, USA)Ben Y. Zhao (University of Chicago, Chicago, IL, USA)Haitao Zheng (University of Chicago, Chicago, IL, USA)
We engineered a wearable microphone jammer that is capable of disabling microphones in its user's surroundings, including hidden microphones. Our device is based on a recent exploit that leverages the fact that when exposed to ultrasonic noise, commodity microphones will leak the noise into the audible range. Unfortunately, ultrasonic jammers are built from multiple transducers and therefore exhibit blind spots, i.e., locations in which transducers destructively interfere and where a microphone cannot be jammed. To solve this, our device exploits a synergy between ultrasonic jamming and the naturally occurring movements that users induce on their wearable devices (e.g., bracelets) as they gesture or walk. We demonstrate that these movements can blur jamming blind spots and increase jamming coverage. Moreover, current jammers are also directional, requiring users to point the jammer at a microphone; instead, our wearable bracelet is built in a ring-layout that allows it to jam in multiple directions. This is beneficial in that it allows our jammer to protect against microphones hidden out of sight. We evaluated our jammer in a series of experiments and found that: (1) it jams in all directions, e.g., our device jams over 87% of the words uttered around it in any direction, while existing devices jam only 30% when not pointed directly at the microphone; (2) it exhibits significantly fewer blind spots; and (3) our device induced a feeling of privacy in participants of our user study. We believe our wearable provides stronger privacy in a world in which most devices are constantly eavesdropping on our conversations.
12
It Is Your Turn: Collaborative Ideation With a Co-Creative Robot through Sketch
Yuyu Lin (Zhejiang University, Hangzhou, Zhejiang, China)Jiahao Guo (Zhejiang University, Hangzhou, Zhejiang, China)Yang Chen (Zhejiang University, Hangzhou, Zhejiang, China)Cheng Yao (Zhejiang University, Hangzhou, Zhejiang, China)Fangtian Ying (China Academy of Art, Hangzhou, Zhejiang, China)
Co-creative systems have been widely explored in the field of computational creativity. However, existing AI partners of these systems are mostly virtual agents. As sketching on paper with embodied robots could be more engaging for designers' early-stage ideation and collaborative practices, we envision the possibility of Cobbie, a mobile robot that ideates iteratively with designers by generating creative and diverse sketches. To evaluate the differences in co-creativity and user experience between the co-creative robots and virtual agents, we conducted a comparative experiment and analyzed the data collected from quantitative scales, observation, and semi-structured interview. The results reveal that Cobbie is more satisfying in motivating exploration, provoking unexpected ideas and engaging designers in the collaborative ideation process. Based on these findings, we discussed the prospects of co-creative robots for future developments of human-AI collaborative systems.
12
Music Creation by Example
Emma Frid (KTH Royal Institute of Technology, Stockholm, Sweden)Celso Gomes (Adobe Research, Seattle, WA, USA)Zeyu Jin (Adobe Research, San Francisco, CA, USA)
Short online videos have become the dominating media on social platforms. However, finding suitable music to accompany videos can be a challenging task to some video creators, due to copyright constraints, limitations in search engines, and required audio-editing expertise. One possible solution to these problems is to use AI music generation. In this paper we present a user interface (UI) paradigm that allows users to input a song to an AI engine and then interactively regenerate and mix AI-generated music. To arrive at this design, we conducted user studies with a total of 104 video creators at several stages of our design and development process. User studies supported the effectiveness of our approach and provided valuable insights about human-AI interaction as well as the design and evaluation of mixed-initiative interfaces in creative practice.
12
Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative Models
Ryan Louie (Northwestern University, Evanston, IL, USA)Andy Coenen (Google Research, Mountain View, CA, USA)Cheng Zhi Huang (Independent Researcher, Mountain View, CA, USA)Michael Terry (Google Research, Cambridge, MA, USA)Carrie J. Cai (Google Research, Mountain View, CA, USA)
While generative deep neural networks (DNNs) have demonstrated their capacity for creating novel musical compositions, less attention has been paid to the challenges and potential of co-creating with these musical AIs, especially for novices. In a needfinding study with a widely used, interactive musical AI, we found that the AI can overwhelm users with the amount of musical content it generates, and frustrate them with its non-deterministic output. To better match co-creation needs, we developed AI-steering tools, consisting of Voice Lanes that restrict content generation to particular voices; Example-Based Sliders to control the similarity of generated content to an existing example; Semantic Sliders to nudge music generation in high-level directions (happy/sad, conventional/surprising); and Multiple Alternatives of generated content to audition and choose from. In a summative study (N=21), we discovered the tools not only increased users' trust, control, comprehension, and sense of collaboration with the AI, but also contributed to a greater sense of self-efficacy and ownership of the composition relative to the AI.
12
In Helping a Vulnerable Bot, You Help Yourself: Designing a Social Bot as a Care-Receiver to Promote Mental Health and Reduce Stigma
Taewan Kim (Seoul National University, Seoul, Republic of Korea)Mintra Ruensuk (Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea)Hwajung Hong (Seoul National University, Seoul, Republic of Korea)
Helping others can have a positive effect on both the giver and the receiver. However, supporting someone with depression can be complicated and overwhelming. To address this, we proposed a Facebook-based social bot displaying depressive symptoms and disclosing vulnerable experiences that allows users to practice providing reactions online. We investigated how 55 college students interacted with the social bot for three weeks and how these support-giving experiences affected their mental health and stigma. By responding to the bot, the participants reframed their own negative experiences, reported reduced feelings of danger regarding an individual with depression and increased willingness to help the person, and presented favorable attitudes toward seeking treatment for depression. We discuss design opportunities for accessible social bots that could help users to keep practicing peer support interventions without fear of negative consequences.
12
Designing and Evaluating 'In the Same Boat', A Game of Embodied Synchronization for Enhancing Social Play
Raquel Breejon Robinson (University of Saskatchewan, Saskatoon, SK, Canada)Elizabeth Reid (University of Saskatchewan, Saskatoon, SK, Canada)James Collin Fey (University of California, Santa Cruz, Santa Cruz, CA, USA)Ansgar E. Depping (University of Saskatchewan, Saskatoon, SK, Canada)Katherine Isbister (University of California, Santa Cruz, Santa Cruz, CA, USA)Regan L. Mandryk (University of Saskatchewan, Saskatoon, SK, Canada)
Social closeness is important for health and well-being, but is difficult to maintain over a distance. Games can help connect people by strengthening existing relationships or creating new ones through shared playful experiences. We present the design and evaluation of 'In the Same Boat' (ITSB), a two-player infinite runner designed to foster social closeness in distributed dyads. ITSB leverages the synchronization of both players' input to steer a canoe down a river and avoid obstacles. We created two versions: embodied controls, which use players' physiological signals (breath rate, facial expressions), and standard keyboard controls. Results from a study with 35 dyads indicate that ITSB fostered affiliation, and while embodied controls were less intuitive, people enjoyed them more. Further, photos of the dyads were rated as happier and closer in the embodied condition, indicating the potential of embodied controls to foster social closeness in synchronized play over a distance.
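One way to picture input synchronization for steering (our illustrative mapping, not the authors' exact rule): the canoe turns with the difference between the two players' inputs and accelerates as the inputs agree.

```python
# Steering from two synchronized inputs: disagreement turns, agreement speeds up.
def canoe_step(p1, p2, heading, speed, dt=1 / 60):
    """p1, p2 in [-1, 1]; returns the updated (heading, speed)."""
    turn = (p1 - p2) * 1.5                  # disagreement steers the canoe
    sync = 1.0 - abs(p1 - p2) / 2           # agreement level in [0, 1]
    speed = 0.9 * speed + 0.1 * 2.0 * sync  # synchronized paddling accelerates
    return heading + turn * dt, speed
```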
11
Improving Virtual Reality Ergonomics Through Reach-Bounded Non-Linear Input Amplification
Johann Wentzel (University of Waterloo, Waterloo, ON, Canada)Greg d'Eon (University of British Columbia, Vancouver, BC, Canada)Daniel Vogel (University of Waterloo, Waterloo, ON, Canada)
Input amplification enables easier movement in virtual reality (VR) for users with mobility issues or in confined spaces. However, current techniques either do not focus on maintaining feelings of body ownership, or are not applicable to general VR tasks. We investigate a general purpose non-linear transfer function that keeps the user's reach within reasonable bounds to maintain body ownership. The technique amplifies smaller movements from a user-definable neutral point into the expected larger movements using a configurable Hermite curve. Two experiments evaluate the approach. The first establishes that the technique has comparable performance to the state-of-the-art, increasing physical comfort while maintaining task performance and body ownership. The second explores the characteristics of the technique over a wide range of amplification levels. Using the combined results, design and implementation recommendations are provided with potential applications to related VR transfer functions.
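The transfer function described above maps a physical offset from a neutral point to a larger, bounded virtual offset. The sketch below is one plausible reading under stated assumptions, not the paper's exact curve configuration: a cubic Hermite segment whose start tangent gives roughly 1:1 motion near the neutral point and whose zero end tangent caps the virtual reach.

```python
import numpy as np

def amplify(pos, neutral, phys_max=0.3, virt_max=0.7):
    """Reach-bounded non-linear amplification: a minimal sketch under
    stated assumptions, not the paper's exact parameterization.

    Physical offsets from `neutral` (metres) are remapped along a cubic
    Hermite segment: unit gain near the neutral point (small motions
    feel 1:1), easing to zero slope at phys_max so the virtual hand
    never exceeds virt_max from the neutral point.
    """
    offset = np.asarray(pos, dtype=float) - neutral
    d = np.linalg.norm(offset)
    if d == 0.0:
        return np.asarray(neutral, dtype=float)
    t = min(d / phys_max, 1.0)      # normalized physical distance
    m0 = phys_max / virt_max        # start tangent => ~1:1 gain at neutral
    # Cubic Hermite basis through (0, 0) and (1, 1) with end tangent 0.
    h = (t**3 - 2*t**2 + t) * m0 + (-2*t**3 + 3*t**2)
    return neutral + offset / d * (h * virt_max)
```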
11
Understanding Walking Meetings: Drivers and Barriers
Ida Damen (Eindhoven University of Technology, Eindhoven, Noord Brabant, Netherlands)Carine Lallemand (Eindhoven University of Technology & University of Luxembourg, Eindhoven, Noord Brabant, Netherlands)Rens Brankaert (Eindhoven University of Technology & Fontys University of Applied Sciences, Eindhoven, Noord Brabant, Netherlands)Aarnout Brombacher (Eindhoven University of Technology, Eindhoven, Noord Brabant, Netherlands)Pieter van Wesemael (Eindhoven University of Technology, Eindhoven, Noord Brabant, Netherlands)Steven Vos (Eindhoven University of Technology & Fontys University of Applied Sciences, Eindhoven, Noord Brabant, Netherlands)
There is increased interest in reducing the sedentary behavior of office workers to combat the negative health effects of prolonged sitting. Walking meetings offer a promising solution to this problem, as they facilitate a physically active way of working. To inform the future development of technologies supporting this type of meeting, in-depth qualitative insights into people's experiences of walking meetings are needed. We conducted semi-structured walking interviews (N=16) to identify key drivers of and barriers to walking meetings in a living lab setting using the 'WorkWalk', a 1.8 km walking route indicated by a dotted blue line with outdoor meeting points, integrated into the room booking system. Our findings provide insights into how walking meetings are experienced and how they affect the set-up and social dynamics of meetings. We offer design recommendations for the development of future technologies and service design elements to support walking meetings and active ways of working.
11
Jubilee: An Extensible Machine for Multi-tool Fabrication
Joshua Vasquez (University of Washington, Seattle, WA, USA)Hannah Twigg-Smith (University of Washington, Seattle, WA, USA)Jasper Tran O'Leary (University of Washington, Seattle, WA, USA)Nadya Peek (University of Washington, Seattle, WA, USA)
We present Jubilee, an open-source hardware machine with automatic tool-changing and interchangeable bed plates. As digital fabrication tools have become more broadly accessible, tailoring those machines to new users and novel workflows has become central to HCI research. However, the lack of hardware infrastructure makes custom application development cumbersome. We identify a need for an extensible platform that allows HCI researchers to develop workflows for fabrication, material exploration, and other applications. Jubilee addresses this need. It can automatically and repeatably change tools within a single operation. It can be built from a combination of simple 3D-printed and readily available parts. It has several standard head designs for a variety of applications, including 3D printing, syringe-based liquid handling, imaging, and plotting. We present Jubilee with a comprehensive set of assembly instructions and kinematic mount templates for user-designed tools and bed plates. Finally, we demonstrate Jubilee's multi-tool workflow functionality with a series of example applications.
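On open hardware motion platforms of this kind, a tool change is typically invoked with a single G-code tool-select command, with the pickup and park choreography delegated to firmware macros. The sketch below streams such commands over a serial link using pyserial; the port name, baud rate, and RepRapFirmware-style T-command convention are assumptions, not Jubilee's documented interface.

```python
import serial

def send(ser, line):
    """Send one G-code line and wait for the firmware's reply."""
    ser.write((line + "\n").encode())
    return ser.readline().decode().strip()

# Port and baud rate are placeholders for a specific setup.
with serial.Serial("/dev/ttyACM0", 115200, timeout=10) as ser:
    send(ser, "T1")          # pick up tool 1 (firmware macros do the choreography)
    send(ser, "G0 X50 Y50")  # move within the work area with the new tool
    send(ser, "T-1")         # park the current tool
```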
11
A Literature Review of Quantitative Persona Creation
Joni Salminen (Qatar Computing Research Institute, Hamad Bin Khalifa University & University of Turku, Doha, Qatar)Kathleen Guan (Georgetown University, Washington, DC, USA)Soon-Gyo Jung (Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar)Shammur A. Chowdhury (Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar)Bernard J. Jansen (Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar)
Quantitative persona creation (QPC) has tremendous potential, as HCI researchers and practitioners can leverage user data from online analytics and digital media platforms to better understand their users and customers. However, the field lacks a systematic overview of QPC methods and progress, with no standard methodology or established best practices. To address this gap, we review 49 QPC research articles from 2005 to 2019. Results indicate three stages of QPC research: Emergence, Diversification, and Sophistication. Sharing resources, such as datasets, code, and algorithms, is crucial to reaching the next stage (Maturity). For practitioners, we provide guiding questions for assessing QPC readiness in organizations.
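Many QPC approaches share a common shape: collect behavioral analytics, cluster users, then describe each cluster as a persona. The sketch below illustrates that general pipeline with k-means; the feature names and synthetic data are stand-ins, and no single method from the review is implied.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical analytics features; replace with real platform data.
features = ["sessions_per_week", "avg_session_minutes", "shares", "comments"]
X = np.random.rand(500, len(features))   # stand-in for 500 users' metrics

# Standardize, cluster, then read each centroid as a persona skeleton.
Xs = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xs)

for i, center in enumerate(km.cluster_centers_):
    traits = dict(zip(features, center.round(2)))   # standardized units
    size = int(np.sum(km.labels_ == i))
    print(f"Persona {i}: n={size}, traits={traits}")
```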
11
Me vs. Super(wo)man: Effects of Customization and Identification in a VR Exergame
Jordan Koulouris (University of Bath, Bath, Somerset, United Kingdom)Zoe Jeffery (University of Bath, Bath, Somerset, United Kingdom)James Best (University of Bath, Bath, Somerset, United Kingdom)Eamonn O'Neill (University of Bath, Bath, Somerset, United Kingdom)Christof Lutteroth (University of Bath, Bath, Somerset, United Kingdom)
Customised avatars are a powerful tool to increase identification, engagement and intrinsic motivation in digital games. We investigated the effects of customisation in a self-competitive VR exergame by modelling players and their previous performance in the game with customised avatars. In a first study we found that, similar to non-exertion games, customisation significantly increased identification and intrinsic motivation, as well as physical performance in the exergame. In a second study we identified a more complex relationship with the customisation style: idealised avatars increased wishful identification but decreased exergame performance compared to realistic avatars. In a third study, we found that 'enhancing' realistic avatars with idealised characteristics increased wishful identification, but did not have any adverse effects. We discuss the findings based on feedforward and self-determination theory, proposing notions of intrinsic identification (fostering a sense of self) and extrinsic identification (drawing away from the self) to explain the results.
11
Nailz: Sensing Hand Input with Touch Sensitive Nails
DoYoung Lee (Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea)SooHwan Lee (Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea)Ian Oakley (Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea)
Touches between the fingers of an unencumbered hand represent a ready-to-use, eyes-free and expressive input space suitable for interacting with wearable devices such as smart glasses or watches. While prior work has focused on touches to the inner surface of the hand, touches to the nails, a practical site for mounting sensing hardware, have been comparatively overlooked. We extend prior implementations of a single touch-sensing nail to a full set of five and explore their potential for wearable input. We present design ideas and an input space of 144 touches (taps, flicks, and swipes) derived from an ideation workshop. We complement this with data from two studies characterizing the subjective comfort and objective characteristics (task time, accuracy) of each touch. We conclude by synthesizing this material into a set of 29 viable nail touches, assessing their performance in a final study, and illustrating how they could be used by presenting, and qualitatively evaluating, two example applications.
11
On-Face Olfactory Interfaces
Yanan Wang (Zhejiang University, Hangzhou, Zhejiang, China)Judith Amores Fernandez (Massachusetts Institute of Technology, Cambridge, MA, USA)Pattie Maes (Massachusetts Institute of Technology, Cambridge, MA, USA)
On-face wearables are currently limited to piercings, tattoos, or interactive makeup that aesthetically enhances the user, and have seen minimal use for scent delivery. However, on-face scent interfaces could offer an advantage for personal scent delivery over other modalities or body locations, since they sit closer to the nose. In this paper, we present the mechanical and industrial design details of a series of form factors for on-face olfactory wearables that are lightweight and can be adhered to the skin or attached to glasses or piercings. We assessed the usability of three prototypes with 12 participants in a within-subjects study while they interacted in pairs at a close personal distance. We compare two of these designs with an "off-face" olfactory necklace and evaluate their social acceptance, comfort, and perceived odor intensity for both the wearer and observer.
11
Too Hot to Handle: An Evaluation of the Effect of Thermal Visual Representation on User Grasping Interaction in Virtual Reality
Andreea Dalia Blaga (Birmingham City University, Birmingham, United Kingdom)Maite Frutos-Pascual (Birmingham City University, Birmingham, United Kingdom)Chris Creed (Birmingham City University, Birmingham, United Kingdom)Ian Williams (Birmingham City University, Birmingham West Midlands, United Kingdom)
The influence of interaction fidelity and rendering quality on perceived user experience has been widely explored in Virtual Reality (VR). However, differences in interaction choices triggered by these rendering cues have not yet been explored. We present a study analysing the effect of thermal visual cues and contextual information on how 50 participants approached grasping and moving a virtual mug. The study comprises 3 temperature cues (baseline empty, hot, and cold) and 4 contextual representations, all embedded in a VR scenario. We evaluate 2 hand representations (abstract and human) to assess grasp metrics. Results show that temperature cues influenced grasp location: the mug handle was predominantly grasped, with a smaller grasp aperture, in the hot condition, while the body and top were preferred in the baseline and cold conditions.
11
Zippro: The Design and Implementation of An Interactive Zipper
Pin-Sung Ku (Dartmouth College & National Taiwan University, Hanover, NH, USA)Jun Gong (Dartmouth College, Hanover, NH, USA)Te-Yen Wu (Dartmouth College, Hanover, NH, USA)Yixin Wei (Dartmouth College & Beijing University of Posts and Telecommunications, Hanover, NH, USA)Yiwen Tang (Dartmouth College & Carnegie Mellon University, Hanover, NH, USA)Barrett Ens (Monash University, Melbourne, VIC, Australia)Xing-Dong Yang (Dartmouth College, Hanover, NH, USA)
Zippers are common in a wide variety of objects that we use daily. This work investigates how we can take advantage of such routine zipper use to support seamless interaction with technology. We look beyond the simple zipper-sliding interactions explored previously to determine how to weave foreground and background interactions into a vocabulary of natural usage patterns. We begin by conducting two user studies to understand how people typically interact with zippers. The findings identify several opportunities for zipper input and sensing, which inform the design of Zippro, a self-contained prototype zipper slider, which we evaluate with a standard jacket zipper. We conclude by demonstrating several applications that make use of the identified foreground and background input methods.
11
Nudge for Deliberativeness: How Interface Features Influence Online Discourse
Sanju Menon (National University of Singapore, Singapore, Singapore)Weiyu Zhang (National University of Singapore, Singapore, Singapore)Simon T. Perrault (Singapore University of Technology and Design, Singapore, Singapore)
Cognitive load poses a significant challenge to users' ability to be deliberative. Interface design has been used to mitigate this cognitive state. This paper surveys the literature on the anchoring effect, the partitioning effect, and the point-of-choice effect, based on which we propose three interface nudges: a word-count anchor, partitioned text fields, and a reply choice prompt. We then conducted a 2×2×2 factorial experiment with 80 participants (10 per condition), testing how these nudges affect deliberativeness. The results showed a significant positive impact of the word-count anchor. The partitioned text fields also had a significant positive impact on response word count. The reply choice prompt had a surprisingly negative effect on the quantity of responses, hinting at the possibility that it induces a fear of evaluation, which could in turn dampen the willingness to reply.
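For concreteness, crossing the three nudges gives the eight cells of the 2×2×2 design; with 10 participants per cell, that accounts for all 80 participants. A quick enumeration:

```python
from itertools import product

# The three binary nudges named in the abstract.
nudges = {
    "word_count_anchor": (False, True),
    "partitioned_text_fields": (False, True),
    "reply_choice_prompt": (False, True),
}

conditions = list(product(*nudges.values()))
for i, cond in enumerate(conditions, 1):
    cell = ", ".join(f"{name}={on}" for name, on in zip(nudges, cond))
    print(f"Condition {i}: {cell}")

# 8 cells x 10 participants per cell = 80 participants in total.
assert len(conditions) * 10 == 80
```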