List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

20
MARVIS: Combining Mobile Devices and Augmented Reality for Visual Data Analysis
Ricardo Langner (Technische Universität Dresden, Dresden, Germany); Marc Satkowski (Technische Universität Dresden, Dresden, Germany); Wolfgang Büschel (Technische Universität Dresden, Dresden, Germany); Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
We present MARVIS, a conceptual framework that combines mobile devices and head-mounted Augmented Reality (AR) for visual data analysis. We propose novel concepts and techniques addressing visualization-specific challenges. By showing additional 2D and 3D information around and above displays, we extend their limited screen space. AR views between displays as well as linking and brushing are also supported, making relationships between separated visualizations plausible. We introduce the design process and rationale for our techniques. To validate MARVIS' concepts and show their versatility and widespread applicability, we describe six implemented example use cases. Finally, we discuss insights from expert hands-on reviews. As a result, we contribute to a better understanding of how the combination of one or more mobile devices with AR can benefit visual data analysis. By exploring this new type of visualization environment, we hope to provide a foundation and inspiration for future mobile data visualizations.
19
Towards an Understanding of Situated AR Visualization for Basketball Free-Throw Training
Tica Lin (Harvard University, Cambridge, Massachusetts, United States); Rishi Singh (Harvard University, Cambridge, Massachusetts, United States); Yalong Yang (Harvard University, Cambridge, Massachusetts, United States); Carolina Nobre (Harvard University, Cambridge, Massachusetts, United States); Johanna Beyer (Harvard University, Cambridge, Massachusetts, United States); Maurice Smith (Harvard University, Cambridge, Massachusetts, United States); Hanspeter Pfister (Harvard University, Cambridge, Massachusetts, United States)
We present an observational study to compare co-located and situated real-time visualizations in basketball free-throw training. Our goal is to understand the advantages and concerns of applying immersive visualization to real-world skill-based sports training and to provide insights for designing AR sports training systems. We design both a situated 3D visualization on a head-mounted display and a 2D visualization on a co-located display to provide immediate visual feedback on a player's shot performance. Using a within-subject study design with experienced basketball shooters, we characterize user goals, report on qualitative training experiences, and compare the quantitative training results. Our results show that real-time visual feedback helps athletes refine subsequent shots. Shooters in our study achieve greater angle consistency with our visual feedback. Furthermore, AR visualization promotes an increased focus on body form in athletes. Finally, we present suggestions for the design of future sports AR studies.
19
Touch&Fold: A Foldable Haptic Actuator for Rendering Touch in Mixed Reality
Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States); Pengyu Li (University of Chicago, Chicago, Illinois, United States); Romain Nith (University of Chicago, Chicago, Illinois, United States); Joshua Fonseca (University of Chicago, Chicago, Illinois, United States); Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a nail-mounted foldable haptic device that provides tactile feedback to mixed reality (MR) environments by pressing against the user’s fingerpad when a user touches a virtual object. What is novel in our device is that it quickly tucks away when the user interacts with real-world objects. Its design allows it to fold back on top of the user’s nail when not in use, keeping the user’s fingerpad free to, for instance, manipulate handheld tools and other objects while in MR. To achieve this, we engineered a wireless and self-contained haptic device, which measures 24×24×41 mm and weighs 9.5 g. Furthermore, our foldable end-effector also features a linear resonant actuator, allowing it to render not only touch contacts (i.e., pressure) but also textures (i.e., vibrations). We demonstrate how our device renders contacts with MR surfaces, buttons, low- and high-frequency textures. In our first user study, we found that participants perceived our device to be more realistic than a previous haptic device that also leaves the fingerpad free (i.e., fingernail vibration). In our second user study, we investigated the participants’ experience while using our device in a real-world task that involved physical objects. We found that our device allowed participants to use the same finger to manipulate handheld tools, small objects, and even feel textures and liquids, without much hindrance to their dexterity, while feeling haptic feedback when touching MR interfaces.
18
Teardrop Glasses: Pseudo Tears Induce Sadness in You and Those Around You
Shigeo Yoshida (The University of Tokyo, Tokyo, Japan); Takuji Narumi (The University of Tokyo, Tokyo, Japan); Tomohiro Tanikawa (The University of Tokyo, Tokyo, Japan); Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan); Michitaka Hirose (The University of Tokyo, Tokyo, Japan)
Emotional contagion is a phenomenon in which one's emotions are transmitted among individuals unconsciously by observing others' emotional expressions. In this paper, we propose a method for mediating people's emotions by triggering emotional contagion through artificial bodily changes such as pseudo tears. We focused on shedding tears because of the link to several emotions besides sadness. In addition, it is expected that shedding tears would induce emotional contagion because it is observable by others. We designed an eyeglasses-style wearable device, Teardrop glasses, that release water drops near the wearer's eyes. The drops flow down the cheeks and emulate real tears. The study revealed that artificial crying with pseudo tears increased sadness among both wearers and those observing them. Moreover, artificial crying attenuated happiness and positive feelings in observers. Our findings show that actual bodily changes are not necessary for inducing emotional contagion as artificial bodily changes are also sufficient.
18
Data-Driven Mark Orientation for Trend Estimation in Scatterplots
Tingting Liu (School of Computer Science, Qingdao, Shandong, China); Xiaotong Li (School of Computer Science, Qingdao, Shandong, China); Chen Bao (Shandong University, Qingdao, Shandong, China); Michael Correll (Tableau Software, Seattle, Washington, United States); Changhe Tu (Shandong University, Qingdao, China); Oliver Deussen (University of Konstanz, Konstanz, Germany); Yunhai Wang (Shandong University, Qingdao, China)
A common task for scatterplots is communicating trends in bivariate data. However, the ability of people to visually estimate these trends is under-explored, especially when the data violate assumptions required for common statistical models, or visual trend estimates are in conflict with statistical ones. In such cases, designers may need to intervene and de-bias these estimations, or otherwise inform viewers about differences between statistical and visual trend estimations. We propose data-driven mark orientation as a solution in such cases, where the directionality of marks in the scatterplot guide participants when visual estimation is otherwise unclear or ambiguous. Through a set of laboratory studies, we investigate trend estimation across a variety of data distributions and mark directionalities, and find that data-driven mark orientation can help resolve ambiguities in visual trend estimates.
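To make the idea concrete, here is a minimal sketch (not the authors' implementation) of data-driven mark orientation: each scatterplot point is drawn as a short segment aligned with a locally fitted trend, so the directionality of the marks themselves conveys the trend. The Gaussian-weighted fit, bandwidth, and mark length are assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic bivariate data with a nonlinear trend.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 300)
y = np.sin(x) + 0.3 * rng.standard_normal(300)

def local_slope(x, y, x0, bandwidth=1.0):
    """Slope of a Gaussian-weighted least-squares line fitted around x0."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    return np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)

theta = np.arctan([local_slope(x, y, xi) for xi in x])  # local trend angle per point
half = 0.12  # half-length of each oriented mark, in data units

fig, ax = plt.subplots()
for xi, yi, t in zip(x, y, theta):
    # Each mark is a short segment aligned with the locally estimated trend.
    ax.plot([xi - half * np.cos(t), xi + half * np.cos(t)],
            [yi - half * np.sin(t), yi + half * np.sin(t)],
            color="steelblue", lw=1.5, solid_capstyle="round")
ax.set_xlabel("x"); ax.set_ylabel("y")
plt.show()
```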
18
Proxemics and Social Interactions in an Instrumented Virtual Reality Workshop
Julie R. Williamson (University of Glasgow, Glasgow, United Kingdom); Jie Li (Centrum Wiskunde & Informatica, Amsterdam, Netherlands); David A. Shamma (Centrum Wiskunde & Informatica, Amsterdam, Netherlands); Vinoba Vinayagamoorthy (BBC Research & Development, London, United Kingdom); Pablo Cesar (CWI, Amsterdam, Netherlands)
Virtual environments (VEs) can create collaborative and social spaces, which are increasingly important in the face of remote work and travel reduction. Recent advances, such as more open and widely available platforms, create new possibilities to observe and analyse interaction in VEs. Using a custom instrumented build of Mozilla Hubs to measure position and orientation, we conducted an academic workshop to facilitate a range of typical workshop activities. We analysed social interactions during a keynote, small group breakouts, and informal networking/hallway conversations. Our mixed-methods approach combined environment logging, observations, and semi-structured interviews. The results demonstrate how small and large spaces influenced group formation, shared attention, and personal space, where smaller rooms facilitated more cohesive groups while larger rooms made small group formation challenging but personal space more flexible. Beyond our findings, we show how the combination of data and insights can fuel collaborative spaces' design and deliver more effective virtual workshops.
18
XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces
João Marcelo Evangelista Belo (Aarhus University, Aarhus, Denmark); Anna Maria Feit (ETH Zurich, Zurich, Switzerland); Tiare Feuchtner (Aarhus University, Aarhus, Denmark); Kaj Grønbæk (Aarhus University, Aarhus, Denmark)
Arm discomfort is a common issue in Cross Reality applications involving prolonged mid-air interaction. Solving this problem is difficult because of the lack of tools and guidelines for 3D user interface design. Therefore, we propose a method to make existing ergonomic metrics available to creators during design by estimating the interaction cost at each reachable position in the user's environment. We present XRgonomics, a toolkit to visualize the interaction cost and make it available at runtime, allowing creators to identify UI positions that optimize users' comfort. Two scenarios show how the toolkit can support 3D UI design and dynamic adaptation of UIs based on spatial constraints. We present results from a walkthrough demonstration, which highlight the potential of XRgonomics to make ergonomics metrics accessible during the design and development of 3D UIs. Finally, we discuss how the toolkit may address design goals beyond ergonomics.
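A rough sketch of the underlying idea of estimating an interaction cost at each reachable position. The cost function below is a simplified stand-in for the established ergonomic metrics the toolkit integrates (e.g., RULA or Consumed Endurance), and all dimensions and weights are assumed values.

```python
import numpy as np

ARM_LENGTH = 0.65                      # metres, assumed reach from the shoulder
SHOULDER = np.array([0.0, 1.4, 0.0])   # assumed shoulder position in world space (y is up)

def interaction_cost(p, shoulder=SHOULDER, arm=ARM_LENGTH):
    """Simplified ergonomic cost of touching point p (lower is more comfortable).

    Stand-in heuristic: penalise raising the arm above shoulder height and
    near-full arm extension. A real metric would replace this function.
    """
    v = p - shoulder
    dist = np.linalg.norm(v)
    if dist > arm:
        return np.inf                                    # unreachable
    elevation = np.arcsin(np.clip(v[1] / max(dist, 1e-6), -1, 1))  # radians above horizontal
    extension = dist / arm                               # 0 = at shoulder, 1 = fully extended
    return 0.6 * max(elevation, 0.0) + 0.4 * extension

# Discretise the space in front of the user and keep the most comfortable voxels.
xs = np.linspace(-0.6, 0.6, 25)
ys = np.linspace(0.8, 2.0, 25)
zs = np.linspace(0.1, 0.7, 13)
voxels = [(x, y, z, interaction_cost(np.array([x, y, z])))
          for x in xs for y in ys for z in zs]
reachable = [v for v in voxels if np.isfinite(v[3])]
for x, y, z, c in sorted(reachable, key=lambda v: v[3])[:5]:
    print(f"candidate UI position ({x:+.2f}, {y:.2f}, {z:.2f})  cost={c:.3f}")
```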
17
STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics
Sebastian Hubenschmid (University of Konstanz, Konstanz, Germany); Johannes Zagermann (University of Konstanz, Konstanz, Germany); Simon Butscher (University of Konstanz, Konstanz, Germany); Harald Reiterer (University of Konstanz, Konstanz, Germany)
Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, interacting with visualizations in augmented reality using the mid-air gestures supported by default can be challenging (e.g. due to limited precision). Touch-based interaction (e.g. via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for the multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality and research implications indicating areas that need further investigation.
16
Reading in VR: The Effect of Text Presentation Type and Location
Rufat Rzayev (University of Regensburg, Regensburg, Germany); Polina Ugnivenko (University of Regensburg, Regensburg, Germany); Sarah Graf (University of Regensburg, Regensburg, Germany); Valentin Schwind (Frankfurt University of Applied Sciences, Frankfurt, Germany); Niels Henze (University of Regensburg, Regensburg, Germany)
Reading is a fundamental activity to obtain information both in the real and the digital world. Virtual reality (VR) allows novel approaches for users to view, read, and interact with a text. However, for efficient reading, it is necessary to understand how a text should be displayed in VR without impairing the VR experience. Therefore, we conducted a study with 18 participants to investigate text presentation type and location in VR. We compared world-fixed, edge-fixed, and head-fixed text locations. Texts were displayed using Rapid Serial Visual Presentation (RSVP) or as a paragraph. We found that RSVP is a promising presentation type for reading short texts displayed in edge-fixed or head-fixed location in VR. The paragraph presentation type using world-fixed or edge-fixed location is promising for reading long text if movement in the virtual environment is not required. Insights from our study inform the design of reading interfaces for VR applications.
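For readers unfamiliar with RSVP, the sketch below illustrates the presentation type in its simplest form: words are flashed one at a time at a fixed location, so no eye movement is needed. This is a console toy, not the study's VR implementation; the 300 words-per-minute default is an assumption.

```python
import sys
import time

def rsvp(text, wpm=300):
    """Show one word at a time at a fixed position (Rapid Serial Visual Presentation)."""
    delay = 60.0 / wpm                            # seconds per word
    for word in text.split():
        sys.stdout.write("\r" + word.center(24))  # overwrite in place: fixed location
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

rsvp("Reading is a fundamental activity to obtain information "
     "both in the real and the digital world.", wpm=300)
```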
16
Vinci: An Intelligent Graphic Design System for Generating Advertising Posters
Shunan Guo (Tongji University, Shanghai, China); Zhuochen Jin (Tongji University, Shanghai, China); Fuling Sun (Tongji University, Shanghai, China); Jingwen Li (Intelligent Big Data Visualization Lab, Tongji University, Shanghai, China); Zhaorui Li (Tongji University, Shanghai, China); Yang Shi (Tongji College of Design and Innovation, Shanghai, China); Nan Cao (Tongji College of Design and Innovation, Shanghai, China)
Advertising posters are a commonly used form of information presentation to promote a product. Producing advertising posters often demands considerable time and effort from designers, who are confronted with abundant choices of design elements and layouts. This paper presents Vinci, an intelligent system that supports the automatic generation of advertising posters. Given the user-specified product image and taglines, Vinci uses a deep generative model to match the product image with a set of design elements and layouts for generating an aesthetic poster. The system also integrates online editing-feedback that supports users in editing the posters and updating the generated results with their design preference. Through a series of user studies and a Turing test, we found that Vinci can generate posters as good as human designers and that the online editing-feedback improves the efficiency in poster modification.
16
ThermoCaress: A Wearable Haptic Device with Illusory Moving Thermal Stimulation
Yuhu Liu (The University of Tokyo, Tokyo, Japan); Satoshi Nishikawa (The University of Tokyo, Tokyo, Japan); Young ah Seong (Hosei University, Tokyo, Japan); Ryuma Niiyama (The University of Tokyo, Tokyo, Japan); Yasuo Kuniyoshi (The University of Tokyo, Tokyo, Japan)
We propose ThermoCaress, a haptic device to create a stroking sensation on the forearm using pressure force and present thermal feedback simultaneously. In our method, based on the phenomenon of thermal referral, by overlapping a stroke of pressure force, users feel as if the thermal stimulation moves although the position of the temperature source is static. We designed the device to be compact and soft, using microblowers and inflatable pouches for presenting pressure force and water for presenting thermal feedback. Our user study showed that the device succeeded in generating thermal referrals and creating a moving thermal illusion. The results also suggested that cold temperatures enhance the pleasantness of stroking. Our findings contribute to expanding the potential of thermal haptic devices.
16
Sketchnote Components, Design Space Dimensions, and Strategies for Effective Visual Note Taking
Rebecca Zheng (University College London, London, United Kingdom); Marina Fernández Camporro (University College London, London, United Kingdom); Hugo Romat (ETH, Zurich, Switzerland); Nathalie Henry Riche (Microsoft Research, Redmond, Washington, United States); Benjamin Bach (Edinburgh University, Edinburgh, United Kingdom); Fanny Chevalier (University of Toronto, Toronto, Ontario, Canada); Ken Hinckley (Microsoft Research, Redmond, Washington, United States); Nicolai Marquardt (University College London, London, United Kingdom)
Sketchnoting is a form of visual note taking where people listen to, synthesize, and visualize ideas from a talk or other event using a combination of pictures, diagrams, and text. Little is known about the design space of this kind of visual note taking. With an eye towards informing the implementation of digital equivalents of sketchnoting, inking, and note taking, we introduce a classification of sketchnote styles and techniques, with a qualitative analysis of 103 sketchnotes, and situated in context with six semi-structured follow up interviews. Our findings distill core sketchnote components (content, layout, structuring elements, and visual styling) and dimensions of the sketchnote design space, classifying levels of conciseness, illustration, structure, personification, cohesion, and craftsmanship. We unpack strategies to address particular note taking challenges, for example dealing with constraints of live drawings, and discuss relevance for future digital inking tools, such as recomposition, styling, and design suggestions.
16
Stereo-Smell via Electrical Trigeminal Stimulation
Jas Brooks (University of Chicago, Chicago, Illinois, United States); Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States); Jingxuan Wen (University of Chicago, Chicago, Illinois, United States); Romain Nith (University of Chicago, Chicago, Illinois, United States); Jun Nishida (University of Chicago, Chicago, Illinois, United States); Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a novel type of olfactory device that creates a stereo-smell experience, i.e., directional information about the location of an odor, by rendering the readings of external odor sensors as trigeminal sensations using electrical stimulation of the user’s nasal septum. The key is that the sensations from the trigeminal nerve, which arise from nerve-endings in the nose, are perceptually fused with those of the olfactory bulb (the brain region that senses smells). As such, we propose that electrically stimulating the trigeminal nerve is an ideal candidate for stereo-smell augmentation/substitution that, unlike other approaches, does not require implanted electrodes in the olfactory bulb. To realize this, we engineered a self-contained device that users wear across their nasal septum. Our device outputs by stimulating the user’s trigeminal nerve using electrical impulses with variable pulse-widths; and it inputs by sensing the user’s inhalations using a photoreflector. It measures 10x23 mm and communicates with external gas sensors using Bluetooth. In our user study, we found the key electrical waveform parameters that enable users to feel an odor’s intensity (absolute electric charge) and direction (phase order and net charge). In our second study, we demonstrated that participants were able to localize a virtual smell source in the room by using our prototype without any previous training. Using these insights, our device enables expressive trigeminal sensations and could function as an assistive device for people with anosmia, who are unable to smell.
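The study identifies absolute charge, phase order, and net charge as the waveform parameters encoding intensity and direction. The sketch below only illustrates how such a biphasic, charge-skewed pulse could be parameterised; the amplitudes, pulse widths, and mappings are invented placeholders, not the paper's calibrated values.

```python
import numpy as np

def biphasic_pulse(intensity, direction, rate_hz=50, fs=100_000):
    """Illustrative charge-based encoding (placeholder parameters, not the paper's).

    intensity in [0, 1]: scales the absolute charge per pulse (felt odor strength)
    direction in [-1, 1]: sign sets the phase order, magnitude sets the net
                          (unbalanced) charge between the two phases
    Returns one period of the stimulation current in milliamps.
    """
    period = np.zeros(int(fs / rate_hz))
    width = int(200e-6 * fs)                  # 200 us per phase (assumed)
    amp = 0.5 + 1.5 * intensity               # mA; absolute charge ~ amp * pulse width
    first, second = (+1, -1) if direction >= 0 else (-1, +1)
    skew = abs(direction)                     # net charge grows with |direction|
    period[:width] = first * amp * (1 + 0.5 * skew)
    period[width:2 * width] = second * amp * (1 - 0.5 * skew)
    return period

pulse = biphasic_pulse(intensity=0.8, direction=-0.4)
print(f"samples per period: {pulse.size}, peak current: {pulse.max():.2f} mA, "
      f"net charge sign: {np.sign(pulse.sum()):+.0f}")
```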
16
Understanding the Design Space of Embodied Passwords based on Muscle Memory
Rosa van Koningsbruggen (Bauhaus-Universität Weimar, Weimar, Germany); Bart Hengeveld (Eindhoven University of Technology, Eindhoven, Netherlands); Jason Alexander (University of Bath, Bath, United Kingdom)
Passwords have become a ubiquitous part of our everyday lives, needed for every web-service and system. However, it is challenging to create safe and diverse alphanumeric passwords, and to recall them, imposing a cognitive burden on the user. Through consecutive experiments, we explored the movement space, affordances and interaction, and memorability of a tangible, handheld, embodied password. In this context, we found that: (1) a movement space of 200 mm × 200 mm is preferred; (2) each context has a perceived level of safety, which—together with the affordances and link to familiarity—influences how the password is performed. Furthermore, the artefact’s dimensions should be balanced within the design itself, with the user, and the context, but there is a trade-off between the perceived safety and ergonomics; and (3) the designed embodied passwords can be recalled for at least a week, with participants creating unique passwords which were reproduced consistently.
16
Improving Viewing Experiences of First-Person Shooter Gameplays with Automatically-Generated Motion Effects
Gyeore Yun (POSTECH, Pohang, Korea, Republic of); Hyoseung Lee (POSTECH, Pohang, Gyeongsangbuk-do, Korea, Republic of); Sangyoon Han (Pohang University of Science and Technology (POSTECH), Pohang, Korea, Republic of); Seungmoon Choi (Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, Korea, Republic of)
In recent times, millions of people enjoy watching video gameplays at an eSports stadium or at home. We seek a method that improves gameplay spectator or viewer experiences by presenting multisensory stimuli. Using a motion chair, we provide the motion effects automatically generated from the audiovisual stream to the viewers watching a first-person shooter (FPS) gameplay. The motion effects express the game character’s movement and gunfire action. We describe algorithms for the computation of such motion effects developed using computer vision techniques and deep learning. Through a user study, we demonstrate that our method of providing motion effects significantly improves the viewing experiences of FPS gameplay. The contributions of this paper are the motion synthesis algorithms integrated for FPS games and the empirical evidence for the benefits of experiencing multisensory gameplays.
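The abstract does not detail the algorithms, but a common vision building block for estimating first-person camera motion is dense optical flow. The sketch below (assuming OpenCV and a hypothetical gameplay.mp4 recording) shows how per-frame flow could be mapped to coarse motion-chair commands; it is not the authors' pipeline and omits, for example, any gunfire detection.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("gameplay.mp4")        # hypothetical gameplay recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow approximates the first-person camera motion.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
    # Map screen-space motion to chair commands: yaw from horizontal flow,
    # pitch from vertical flow (gains are made-up placeholders).
    yaw_cmd, pitch_cmd = 0.02 * dx, -0.02 * dy
    print(f"yaw {yaw_cmd:+.3f}  pitch {pitch_cmd:+.3f}")
    prev_gray = gray
cap.release()
```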
16
IdeaBot: Investigating Social Facilitation in Human-Machine Team Creativity
Angel Hsing-Chi Hwang (Cornell University, Ithaca, New York, United States); Andrea Stevenson Won (Cornell University, Ithaca, New York, United States)
The present study investigates how human subjects collaborate with a computer-mediated chatbot in creative idea generation tasks. In three text-based between-group studies, we tested whether the perceived identity (i.e., whether the bot is perceived as a machine or as a human) or the conversational style of a teammate would moderate the outcomes of participants’ creative production. In Study 1, participants worked with either a chatbot or a human confederate. In Study 2, all participants worked with a human teammate but were informed that their partner was either a human or a chatbot. Conversely, all participants worked with a chatbot in Study 3, but were told the identity of their partner was either a chatbot or a human. We investigated differences in idea generation outcomes and found that participants consistently contributed more ideas, and ideas of higher quality, when they perceived their teamworking partner as a bot. Furthermore, when the conversational style of the partner was robotic, participants with high anxiety in group communication reported greater creative self-efficacy in task performance. Finally, whether the perceived dominance of a partner and the pressure to come up with ideas during the task mediated positive outcomes of idea generation also depended on whether the conversational style of the bot partner was robot- or human-like. Based on our findings, we discuss implications for future design of artificial agents as active team players in collaboration tasks.
15
Gaze-Supported 3D Object Manipulation in Virtual Reality
Difeng Yu (The University of Melbourne, Melbourne, VIC, Australia); Xueshi Lu (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China); Rongkai Shi (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China); Hai-Ning Liang (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China); Tilman Dingler (The University of Melbourne, Melbourne, VIC, Australia); Eduardo Velloso (The University of Melbourne, Melbourne, VIC, Australia); Jorge Goncalves (The University of Melbourne, Melbourne, VIC, Australia)
This paper investigates integration, coordination, and transition strategies of gaze and hand input for 3D object manipulation in VR. Specifically, this work aims to understand whether incorporating gaze input can benefit VR object manipulation tasks, and how it should be combined with hand input for improved usability and efficiency. We designed four gaze-supported techniques that leverage different combination strategies for object manipulation and evaluated them in two user studies. Overall, we show that gaze did not offer significant performance benefits for transforming objects in the primary working space, where all objects were located in front of the user and within the arm-reach distance, but can be useful for a larger environment with distant targets. We further offer insights regarding combination strategies of gaze and hand input, and derive implications that can help guide the design of future VR systems that incorporate gaze input for 3D object manipulation.
15
Physiological and Perceptual Responses to Athletic Avatars while Cycling in Virtual Reality
Martin Kocur (University of Regensburg, Regensburg, Germany); Florian Habler (University of Regensburg, Regensburg, Germany); Valentin Schwind (Frankfurt University of Applied Sciences, Frankfurt, Germany); Paweł W. Woźniak (Utrecht University, Utrecht, Netherlands); Christian Wolff (University of Regensburg, Regensburg, Bavaria, Germany); Niels Henze (University of Regensburg, Regensburg, Germany)
Avatars in virtual reality (VR) enable embodied experiences and induce the Proteus effect - a shift in behavior and attitude to mimic one's digital representation. Previous work found that avatars associated with physical strength can decrease users' perceived exertion when performing physical tasks. However, it is unknown if an avatar's appearance can also influence the user's physiological response to exercises. Therefore, we conducted an experiment with 24 participants to investigate the effect of avatars' athleticism on heart rate and perceived exertion while cycling in VR following a standardized protocol. We found that the avatars' athleticism has a significant and systematic effect on users' heart rate and perceived exertion. We discuss potential moderators such as body ownership and users' level of fitness. Our work contributes to the emerging area of VR exercise systems.
15
Visuo-haptic Illusions for Linear Translation and Stretching using Physical Proxies in Virtual Reality
Martin Feick (Saarland Informatics Campus, Saarbrücken, Germany); Niko Kleer (Saarland Informatics Campus, Saarbrücken, Germany); André Zenner (Saarland University, Saarland Informatics Campus, Saarbrücken, Germany); Anthony Tang (University of Toronto, Toronto, Ontario, Canada); Antonio Krüger (DFKI, Saarland Informatics Campus, Saarbrücken, Germany)
Providing haptic feedback when manipulating virtual objects is an essential part of immersive virtual reality experiences; however, it is challenging to replicate all of an object’s properties and characteristics. We propose the use of visuo-haptic illusions alongside physical proxies to enhance the scope of proxy-based interactions with virtual objects. In this work, we focus on two manipulation techniques, linear translation and stretching across different distances, and investigate how much discrepancy between the physical proxy and the virtual object may be introduced without participants noticing. In a study with 24 participants, we found that manipulation technique and travel distance significantly affect the detection thresholds, and that visuo-haptic illusions impact performance and accuracy. We show that this technique can be used to enable functional proxy objects that act as stand-ins for multiple virtual objects, illustrating the technique through a showcase VR-DJ application.
15
Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics
Martin Schmitz (Technical University of Darmstadt, Darmstadt, Germany); Jan Riemann (Technical University of Darmstadt, Darmstadt, Germany); Florian Müller (TU Darmstadt, Darmstadt, Germany); Steffen Kreis (TU Darmstadt, Darmstadt, Germany); Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability by a series of example applications.
15
Bad Breakdowns, Useful Seams, and Face Slapping: Analysis of VR Fails on YouTube
Emily Dao (Monash University, Melbourne, Victoria, Australia); Andreea Muresan (University of Copenhagen, Copenhagen, Denmark); Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark); Jarrod Knibbe (University of Melbourne, Melbourne, Australia)
Virtual reality (VR) is increasingly used in complex social and physical settings outside of the lab. However, not much is known about how these settings influence use, nor how to design for them. We analyse 233 YouTube videos of VR Fails to: (1) understand when breakdowns occur, and (2) reveal how the seams between VR use and the social and physical setting emerge. The videos show a variety of fails, including users flailing, colliding with surroundings, and hitting spectators. They also suggest causes of the fails, including fear, sensorimotor mismatches, and spectator participation. We use the videos as inspiration to generate design ideas. For example, we discuss more flexible boundaries between the real and virtual world, ways of involving spectators, and interaction designs to help overcome fear. Based on the findings, we further discuss the ‘moment of breakdown’ as an opportunity for designing engaging and enhanced VR experiences.
14
More Kawaii than a Real-Person Streamer: Understanding How the Otaku Community Engages with and Perceives Virtual YouTubers
Zhicong Lu (City University of Hong Kong, Hong Kong, China); Chenxinran Shen (University of Toronto, Toronto, Ontario, Canada); Jiannan Li (University of Toronto, Toronto, Ontario, Canada); Hong Shen (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Daniel Wigdor (University of Toronto, Toronto, Ontario, Canada)
Live streaming has become increasingly popular, with most streamers presenting their real-life appearance. However, Virtual YouTubers (VTubers), virtual 2D or 3D avatars that are voiced by humans, are emerging as live streamers and attracting a growing viewership in East Asia. Although prior research has found that many viewers seek real-life interpersonal interactions with real-person streamers, it is currently unknown what makes VTuber live streams engaging or how they are perceived differently than real-person streamers. We conducted an interview study to understand how viewers engage with VTubers and perceive the identities of the voice actors behind the avatars (i.e., Nakanohito). The data revealed that virtual avatars bring unique performative opportunities which result in different viewer expectations and interpretations of VTuber behavior. Viewers intentionally upheld the disembodiment of VTuber avatars from their voice actors. We uncover the nuances in viewer perceptions and attitudes and further discuss the implications of VTuber practices for the understanding of live streaming in general.
14
Radi-Eye: Hands-free Radial Interfaces for 3D Interaction using Gaze-activated Head-crossing
Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom); Dominic Potts (Lancaster University, Lancaster, Lancashire, United Kingdom); Bill Bapisch (Ludwig-Maximilians-Universität, Munich, Germany); Hans Gellersen (Aarhus University, Aarhus, Denmark)
Eye gaze and head movement are attractive for hands-free 3D interaction in head-mounted displays, but existing interfaces afford only limited control. Radi-Eye is a novel pop-up radial interface designed to maximise expressiveness with input from only the eyes and head. Radi-Eye provides widgets for discrete and continuous input and scales to support larger feature sets. Widgets can be selected with Look & Cross, using gaze for pre-selection followed by head-crossing as trigger and for manipulation. The technique leverages natural eye-head coordination where eye and head move at an offset unless explicitly brought into alignment, enabling interaction without risk of unintended input. We explore Radi-Eye in three augmented and virtual reality applications, and evaluate the effect of radial interface scale and orientation on performance with Look & Cross. The results show that Radi-Eye provides users with fast and accurate input while opening up a new design space for hands-free fluid interaction.
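A toy reconstruction of the Look & Cross idea as described in the abstract: gaze entering a widget pre-selects it, and the selection triggers only when the head direction subsequently crosses into that widget. The widget geometry, angular thresholds, and sample format below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Widget:
    name: str
    center: tuple          # angular position (yaw, pitch) in degrees
    radius: float          # angular radius in degrees

def inside(direction, widget):
    dy, dp = direction[0] - widget.center[0], direction[1] - widget.center[1]
    return (dy * dy + dp * dp) ** 0.5 <= widget.radius

def look_and_cross(samples, widgets):
    """Toy version of Look & Cross: gaze entering a widget pre-selects it; the
    selection triggers only when the head direction then crosses into that widget."""
    preselected, events = None, []
    head_was_inside = False
    for gaze, head in samples:                   # (yaw, pitch) pairs per frame
        hit = next((w for w in widgets if inside(gaze, w)), None)
        if hit is not None and hit is not preselected:
            preselected, head_was_inside = hit, inside(head, hit)
        if preselected is not None:
            now_inside = inside(head, preselected)
            if now_inside and not head_was_inside:
                events.append(preselected.name)  # head crossed the boundary: trigger
            head_was_inside = now_inside
    return events

widgets = [Widget("volume", (10.0, 0.0), 3.0), Widget("mute", (-10.0, 0.0), 3.0)]
samples = [((0, 0), (0, 0)), ((10, 0), (4, 0)), ((10, 0), (9, 0)), ((10, 1), (10, 0))]
print(look_and_cross(samples, widgets))          # -> ['volume']
```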
14
TexYZ: Embroidering Enameled Wires for Three Degree-of-Freedom Mutual Capacitive Sensing
Roland Aigner (University of Applied Sciences Upper Austria, Hagenberg, Austria); Andreas Pointner (University of Applied Sciences Upper Austria, Hagenberg, Austria); Thomas Preindl (University of Applied Sciences Upper Austria, Hagenberg, Austria); Rainer Danner (University of Applied Sciences Upper Austria, Hagenberg, Austria); Michael Haller (University of Applied Sciences Upper Austria, Hagenberg, Austria)
In this paper, we present TexYZ, a method for rapid and effortless manufacturing of textile mutual capacitive sensors using a commodity embroidery machine. We use enameled wire as a bobbin thread to yield textile capacitors with high quality and consistency. As a consequence, we are able to leverage the precision and expressiveness of projected mutual capacitance for textile electronics, even when size is limited. Harnessing the assets of machine embroidery, we implement and analyze five distinct electrode patterns, examine the resulting electrical features with respect to geometrical attributes, and demonstrate the feasibility of two promising candidates for small-scale matrix layouts. The resulting sensor patches are further evaluated in terms of capacitance homogeneity, signal-to-noise ratio, sensing range, and washability. Finally, we demonstrate two use case scenarios, primarily focusing on continuous input with up to three degrees-of-freedom.
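As a generic illustration of how a mutual-capacitance matrix can yield three degrees of freedom (position plus a pressure-like channel), the sketch below computes a signal-weighted centroid and the summed signal. It is a textbook readout, not the embroidered sensor's calibrated pipeline.

```python
import numpy as np

def three_dof_from_matrix(cap_delta):
    """Estimate (x, y, pressure) from a mutual-capacitance delta matrix.

    cap_delta[i, j] is the signal change at the crossing of row i and column j.
    The centroid gives position; the summed signal serves as the third DoF.
    """
    total = cap_delta.sum()
    if total <= 0:
        return None
    rows, cols = np.indices(cap_delta.shape)
    y = (rows * cap_delta).sum() / total      # row-wise centroid
    x = (cols * cap_delta).sum() / total      # column-wise centroid
    return x, y, total

# A 4x4 patch with a touch centred near column 2, pressed firmly.
frame = np.array([[0, 0, 0, 0],
                  [0, 1, 6, 1],
                  [0, 1, 5, 1],
                  [0, 0, 1, 0]], dtype=float)
print(three_dof_from_matrix(frame))   # -> approx (x=2.0, y=1.6, pressure=16.0)
```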
14
Standardizing Participant Compensation Reporting in HCI: A Meta-Review and Recommendations for the Field
Jessica Pater (Parkview Health, Fort Wayne, Indiana, United States); Amanda Coupe (Parkview Health, Fort Wayne, Indiana, United States); Rachel Pfafman (Parkview Health, Fort Wayne, Indiana, United States); Chanda Phelan (University of Michigan, Ann Arbor, Michigan, United States); Tammy Toscos (Parkview Health, Fort Wayne, Indiana, United States); Maia Jacobs (Northwestern University, Evanston, Illinois, United States)
The user study is a fundamental method used in HCI. In designing user studies, we often use compensation strategies to incentivize recruitment. However, compensation can also lead to ethical issues, such as coercion. The CHI community has yet to establish best practices for participant compensation. Through a systematic review of manuscripts at CHI and other associated publication venues, we found high levels of variation in the compensation strategies used within the community and how we report on this aspect of the study methods. A qualitative analysis of justifications offered for compensation sheds light into how some researchers are currently contextualizing this practice. This paper provides a description of current compensation strategies and information that can inform the design of compensation strategies in future studies. The findings may be helpful to generate productive discourse in the HCI community towards the development of best practices for participant compensation in user studies.
14
Large Scale Analysis of Multitasking Behavior During Remote Meetings
Hancheng Cao (Stanford University, Stanford, California, United States); Chia-Jung Lee (Amazon, Seattle, Washington, United States); Shamsi Iqbal (Microsoft Research, Redmond, Washington, United States); Mary Czerwinski (Microsoft Research, Redmond, Washington, United States); Priscilla N. Y. Wong (UCL Interaction Centre, London, United Kingdom); Sean Rintel (Microsoft Research, Cambridge, United Kingdom); Brent Hecht (Microsoft, Redmond, Washington, United States); Jaime Teevan (Microsoft, Redmond, Washington, United States); Longqi Yang (Microsoft, Redmond, Washington, United States)
Virtual meetings are critical for remote work because of the need for synchronous collaboration in the absence of in-person interactions. In-meeting multitasking is closely linked to people's productivity and wellbeing. However, we currently have limited understanding of multitasking in remote meetings and its potential impact. In this paper, we present what we believe is the most comprehensive study of remote meeting multitasking behavior through an analysis of a large-scale telemetry dataset collected from February to May 2020 of U.S. Microsoft employees and a 715-person diary study. Our results demonstrate that intrinsic meeting characteristics such as size, length, time, and type, significantly correlate with the extent to which people multitask, and multitasking can lead to both positive and negative outcomes. Our findings suggest important best-practice guidelines for remote meetings (e.g., avoid important meetings in the morning) and design implications for productivity tools (e.g., support positive remote multitasking).
14
GuideCopter - A Precise Drone-Based Haptic Guidance Interface for Blind or Visually Impaired People
Felix Huppert (University of Passau, Passau, Bavaria, Germany); Gerold Hoelzl (University of Passau, Passau, Bavaria, Germany); Matthias Kranz (University of Passau, Passau, Bavaria, Germany)
Drone-assisted navigation aids for supporting the walking activities of visually impaired people have been established in related work, but fine-point object grasping and object localization in unknown environments still present an open and complex challenge. We present a drone-based interface that provides fine-grained haptic feedback and thus physically guides blind or visually impaired users in hand-object localization tasks in unknown surroundings. Our research is built around community groups of blind or visually impaired (BVI) people, which provided in-depth insights during the development process and later served as study participants. A pilot study infers users' sensibility to applied guiding stimuli forces and the different human-drone tether interfacing possibilities. In a comparative follow-up study, we show that our drone-based approach achieves greater accuracy compared to a current audio-based hand guiding system and delivers overall a more intuitive and relatable fine-point guiding experience.
14
Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion and Inverse Kinematics
Karan Ahuja (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Sven Mayer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Mayank Goel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
We present Pose-on-the-Go, a full-body pose estimation system that uses sensors already found in today’s smartphones. This stands in contrast to prior systems, which require worn or external sensors. We achieve this result via extensive sensor fusion, leveraging a phone’s front and rear cameras, the user-facing depth camera, touchscreen, and IMU. Even still, we are missing data about a user’s body (e.g., angle of the elbow joint), and so we use inverse kinematics to estimate and animate probable body poses. We provide a detailed evaluation of our system, benchmarking it against a professional-grade Vicon tracking system. We conclude with a series of demonstration applications that underscore the unique potential of our approach, which could be enabled on many modern smartphones with a simple software update.
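One small piece of such a pipeline can be shown compactly: given an assumed shoulder position and the phone's position, a two-link inverse-kinematics step (law of cosines) recovers a plausible elbow angle. The segment lengths and positions below are made-up values, and the actual system fuses many more signals than this.

```python
import numpy as np

UPPER_ARM = 0.30   # metres (assumed segment lengths)
FOREARM   = 0.27

def elbow_angle(shoulder, phone):
    """Interior elbow angle (radians) for a two-link arm whose hand holds the phone.

    Uses the law of cosines; this is a generic IK step, not the paper's full solver.
    0 rad = arm fully folded, pi rad = arm fully extended.
    """
    d = np.linalg.norm(np.asarray(phone) - np.asarray(shoulder))
    d = np.clip(d, abs(UPPER_ARM - FOREARM) + 1e-6, UPPER_ARM + FOREARM - 1e-6)
    cos_elbow = (UPPER_ARM**2 + FOREARM**2 - d**2) / (2 * UPPER_ARM * FOREARM)
    return np.arccos(np.clip(cos_elbow, -1.0, 1.0))

# Phone close to the shoulder -> sharply bent elbow; phone far away -> nearly straight arm.
for phone_pos in ([0.15, 1.35, 0.25], [0.10, 1.10, 0.50]):
    ang = np.degrees(elbow_angle([0.0, 1.45, 0.0], phone_pos))
    print(f"phone at {phone_pos}: elbow ~{ang:.0f} deg")
```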
14
Increasing Electrical Muscle Stimulation’s Dexterity by means of Back of the Hand Actuation
Akifumi Takahashi (University of Chicago, Chicago, Illinois, United States); Jas Brooks (University of Chicago, Chicago, Illinois, United States); Hiroyuki Kajimoto (The University of Electro-Communications, Chofu, Tokyo, Japan); Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a technique that allows an unprecedented level of dexterity in electrical muscle stimulation (EMS), i.e., it allows interactive EMS-based devices to flex the user’s fingers independently of each other. EMS is a promising technique for force feedback because of its small form factor when compared to mechanical actuators. However, the current EMS approach to flexing the user’s fingers (i.e., attaching electrodes to the base of the forearm, where finger muscles anchor) is limited by its inability to flex a target finger’s metacarpophalangeal (MCP) joint independently of the other fingers. In other words, current EMS devices cannot flex one finger alone, they always induce unwanted actuation to adjacent fingers. To tackle the lack of dexterity, we propose and validate a new electrode layout that places the electrodes on the back of the hand, where they stimulate the interossei/lumbricals muscles in the palm, which have never received attention with regards to EMS. In our user study, we found that our technique offers four key benefits when compared to existing EMS electrode layouts: our technique (1) flexes all four fingers around the MCP joint more independently; (2) has less unwanted flexion of other joints (such as the proximal interphalangeal joint); (3) is more robust to wrist rotations; and (4) reduces calibration time. Therefore, our EMS technique enables applications for interactive EMS systems that require a level of flexion dexterity not available until now. We demonstrate the improved dexterity with four example applications: three musical instrumental tutorials (piano, drum, and guitar) and a VR application that renders force feedback in individual fingers while manipulating a yo-yo.
14
MIRIA: A Mixed Reality Toolkit for the In-Situ Visualization and Analysis of Spatio-Temporal Interaction Data
Wolfgang Büschel (Technische Universität Dresden, Dresden, Germany); Anke Lehmann (Technische Universität Dresden, Dresden, Germany); Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
In this paper, we present MIRIA, a Mixed Reality Interaction Analysis toolkit designed to support the in-situ visual analysis of user interaction in mixed reality and multi-display environments. So far, there are few options to effectively explore and analyze interaction patterns in such novel computing systems. With MIRIA, we address this gap by supporting the analysis of user movement, spatial interaction, and event data by multiple, co-located users directly in the original environment. Based on our own experiences and an analysis of the typical data, tasks, and visualizations used in existing approaches, we identify requirements for our system. We report on the design and prototypical implementation of MIRIA, which is informed by these requirements and offers various visualizations such as 3D movement trajectories, position heatmaps, and scatterplots. To demonstrate the value of MIRIA for real-world analysis tasks, we conducted expert feedback sessions using several use cases with authentic study data.
14
Data@Hand: Fostering Visual Exploration of Personal Data on Smartphones Leveraging Speech and Touch Interaction
Young-Ho Kim (University of Maryland, College Park, Maryland, United States); Bongshin Lee (Microsoft Research, Redmond, Washington, United States); Arjun Srinivasan (Tableau Research, Seattle, Washington, United States); Eun Kyoung Choe (University of Maryland, College Park, Maryland, United States)
Most mobile health apps employ data visualization to help people view their health and activity data, but these apps provide limited support for visual data exploration. Furthermore, despite its huge potential benefits, mobile visualization research in the personal data context is sparse. This work aims to empower people to easily navigate and compare their personal health data on smartphones by enabling flexible time manipulation with speech. We designed and developed Data@Hand, a mobile app that leverages the synergy of two complementary modalities: speech and touch. Through an exploratory study with 13 long-term Fitbit users, we examined how multimodal interaction helps participants explore their own health data. Participants successfully adopted multimodal interaction (i.e., speech and touch) for convenient and fluid data exploration. Based on the quantitative and qualitative findings, we discuss design implications and opportunities with multimodal interaction for better supporting visual data exploration on mobile devices.
14
Towards “Avatar-Friendly” 3D Manipulation Techniques: Bridging the Gap Between Sense of Embodiment and Interaction in Virtual Reality
Diane Dewez (Inria, Rennes, France); Ludovic Hoyet (Inria, Rennes, France); Anatole Lécuyer (Inria, Rennes, France); Ferran Argelaguet Sanz (Inria, Rennes, France)
Avatars, the users' virtual representations, are becoming ubiquitous in virtual reality applications. In this context, the avatar becomes the medium which enables users to manipulate objects in the virtual environment. It also becomes the users' main spatial reference, which can not only alter their interaction with the virtual environment, but also the perception of themselves. In this paper, we review and analyse the current state-of-the-art for 3D object manipulation and the sense of embodiment. Our analysis is twofold. First, we discuss the impact that the avatar can have on object manipulation. Second, we discuss how the different components of a manipulation technique (i.e. input, control and feedback) can influence the user’s sense of embodiment. Throughout the analysis, we crystallise our discussion with practical guidelines for VR application designers and we propose several research topics towards “avatar-friendly” manipulation techniques.
14
MeetingCoach: An Intelligent Dashboard for Supporting Effective & Inclusive Meetings
Samiha Samrose (University of Rochester, Rochester, New York, United States); Daniel McDuff (Microsoft, Seattle, Washington, United States); Robert Sim (Microsoft, Redmond, Washington, United States); Jina Suh (Microsoft Research, Redmond, Washington, United States); Kael Rowan (Microsoft Research, Redmond, Washington, United States); Javier Hernandez (Microsoft Research, Cambridge, Massachusetts, United States); Sean Rintel (Microsoft Research, Cambridge, United Kingdom); Kevin Moynihan (Microsoft Research, Barcelona, Spain); Mary Czerwinski (Microsoft Research, Redmond, Washington, United States)
Video-conferencing is essential for many companies, but its limitations in conveying social cues can lead to ineffective meetings. We present MeetingCoach, an intelligent post-meeting feedback dashboard that summarizes contextual and behavioral meeting information. Through an exploratory survey (N=120), we identified important signals (e.g., turn taking, sentiment) and used these insights to create a wireframe dashboard. The design was evaluated with in situ participants (N=16) who helped identify the components they would prefer in a post-meeting dashboard. After recording video-conferencing meetings of eight teams over four weeks, we developed an AI system to quantify the meeting features and created personalized dashboards for each participant. Through interviews and surveys (N=23), we found that reviewing the dashboard helped improve attendees' awareness of meeting dynamics, with implications for improved effectiveness and inclusivity. Based on our findings, we provide suggestions for future feedback system designs of video-conferencing meetings.
13
Grand Challenges in Immersive Analytics
Barrett Ens (Monash University, Melbourne, Australia); Benjamin Bach (Edinburgh University, Edinburgh, United Kingdom); Maxime Cordeil (Monash University, Melbourne, Australia); Ulrich Engelke (CSIRO, Kensington, WA, Australia); Marcos Serrano (IRIT - Elipse, Toulouse, France); Wesley Willett (University of Calgary, Calgary, Alberta, Canada); Arnaud Prouzeau (Monash University, Melbourne, Australia); Christoph Anthes (University of Applied Sciences Upper Austria, Hagenberg, Austria); Wolfgang Büschel (Technische Universität Dresden, Dresden, Germany); Cody Dunne (Northeastern University, Boston, Massachusetts, United States); Tim Dwyer (Monash University, Melbourne, Australia); Jens Grubert (Coburg University, Coburg, Bavaria, Germany); Jason Haga (AIST, Tsukuba, Ibaraki, Japan); Nurit Kirshenbaum (University of Hawaii at Manoa, Honolulu, Hawaii, United States); Dylan Kobayashi (University of Hawaiʻi at Mānoa, Honolulu, Hawaii, United States); Tica Lin (Harvard University, Cambridge, Massachusetts, United States); Monsurat Olaosebikan (Tufts University, Medford, Massachusetts, United States); Fabian Pointecker (University of Applied Sciences Upper Austria, Hagenberg, Austria); David Saffo (Northeastern University, Boston, Massachusetts, United States); Nazmus Saquib (MIT, Cambridge, Massachusetts, United States); Dieter Schmalstieg (Graz University of Technology, Graz, Austria); Danielle Albers Szafir (University of Colorado Boulder, Boulder, Colorado, United States); Matt Whitlock (University of Colorado, Boulder, Colorado, United States); Yalong Yang (Harvard University, Cambridge, Massachusetts, United States)
Immersive Analytics is a quickly evolving field that unites several areas such as visualisation, immersive environments, and human-computer interaction to support human data analysis with emerging technologies. This research has thrived over the past years with multiple workshops, seminars, and a growing body of publications, spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper aims toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.
13
Phonetroller: Visual Representations of Fingers for Precise Touch Input when using a Phone in VR
Fabrice Matulic (Preferred Networks Inc., Tokyo, Japan); Aditya Ganeshan (Preferred Networks Inc., Tokyo, Japan); Hiroshi Fujiwara (Preferred Networks Inc., Tokyo, Japan); Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
Smartphone touch screens are potentially attractive for interaction in virtual reality (VR). However, the user cannot see the phone or their hands in a fully immersive VR setting, impeding their ability for precise touch input. We propose mounting a mirror above the phone screen such that the front-facing camera captures the thumbs on or near the screen. This enables the creation of semi-transparent overlays of thumb shadows and inference of fingertip hover points with deep learning, which help the user aim for targets on the phone. A study compares the effect of visual feedback on touch precision in a controlled task and qualitatively evaluates three example applications demonstrating the potential of the technique. The results show that the enabled style of feedback is effective for thumb-size targets, and that the VR experience can be enriched by using smartphones as VR controllers supporting precise touch input.
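A rough sketch of the overlay idea: segment the thumbs in the mirrored front-camera frame and alpha-blend their silhouettes onto the virtual phone screen. The brightness-threshold segmentation stands in for the paper's learned model, and the images here are synthetic placeholders (assumes OpenCV and NumPy are available).

```python
import cv2
import numpy as np

def thumb_overlay(mirror_frame, ui_texture, alpha=0.4):
    """Composite a semi-transparent thumb silhouette over the virtual phone UI.

    Segmentation is a naive brightness threshold standing in for a learned model;
    mirror_frame is the front camera's view of the thumbs via the mirror.
    """
    gray = cv2.cvtColor(mirror_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)       # placeholder segmentation
    mask = cv2.resize(mask, (ui_texture.shape[1], ui_texture.shape[0]))
    shadow = np.zeros_like(ui_texture)                               # black "thumb shadow"
    m = (mask > 0)[..., None]
    blended = np.where(m, (1 - alpha) * ui_texture + alpha * shadow, ui_texture)
    return blended.astype(np.uint8)

# Synthetic stand-ins for a camera frame and the phone UI texture.
frame = np.full((240, 320, 3), 30, np.uint8)
cv2.circle(frame, (160, 200), 60, (200, 180, 170), -1)               # fake thumb blob
ui = np.full((480, 640, 3), 255, np.uint8)
out = thumb_overlay(frame, ui)
print(out.shape, out.min(), out.max())
```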
12
SoniBand: Understanding the Effects of Metaphorical Movement Sonifications on Body Perception and Physical Activity
Judith Ley-Flores (Universidad Carlos III de Madrid, Leganes, Madrid, Spain); Laia Turmo Vidal (Uppsala University, Uppsala, Sweden); Nadia Berthouze (University College London, London, United Kingdom); Aneesha Singh (University College London, London, United Kingdom); Frederic Bevilacqua (STMS IRCAM-CNRS-Sorbonne Université, Paris, France); Ana Tajadura-Jiménez (Universidad Carlos III de Madrid / University College London, Madrid / London, Spain)
Negative body perceptions are a major predictor of physical inactivity, a serious health concern. Sensory feedback can be used to alter such body perception; movement sonification, in particular, has been suggested to affect body perception and levels of physical activity (PA) in inactive people. We investigated how metaphorical sounds impact body perception and PA. We report two qualitative studies centered on performing different strengthening/flexibility exercises using SoniBand, a wearable that augments movement through different sounds. The first study involved physically active participants and served to obtain a nuanced understanding of the sonifications’ impact. The second, in the home of physically inactive participants, served to identify which effects could support PA adherence. Our findings show that movement sonification based on metaphors led to changes in body perception (e.g., feeling strong) and PA (e.g., repetitions) in both populations, but effects could differ according to the existing PA-level. We discuss principles for metaphor-based sonification design to foster PA.
12
Tele-Immersive Improv: Effects of Immersive Visualisations on Rehearsing and Performing Theatre Online
Boyd Branch (University of Kent, Canterbury, Kent, United Kingdom); Christos Efstratiou (University of Kent, Canterbury, Kent, United Kingdom); Piotr Mirowski (HumanMachine, London, United Kingdom); Kory W. Mathewson (University of Alberta, Edmonton, Alberta, Canada); Paul Allain (University of Kent, Canterbury, United Kingdom)
Performers acutely need but lack tools to remotely rehearse and create live theatre, particularly due to global restrictions on social interactions during the Covid-19 pandemic. No studies, however, have heretofore examined how remote video-collaboration affects performance. This paper presents the findings of a field study with 16 domain experts over six weeks investigating how tele-immersion affects the rehearsal and performance of improvisational theatre. To conduct the study, an original media server was developed for co-locating remote performers into shared virtual 3D environments which were accessed through popular video conferencing software. The results of this qualitative study indicate that tele-immersive environments uniquely provide performers with a strong sense of co- presence, feelings of physical connection, and an increased ability to enter the social-flow states required for improvisational theatre. Based on our observations, we put forward design recommendations for video collaboration tools tailored to the unique demands of live performance.
12
DistanciAR: Authoring Site-Specific Augmented Reality Experiences for Remote Environments
Zeyu Wang (Yale University, New Haven, Connecticut, United States); Cuong Nguyen (Adobe Research, San Francisco, California, United States); Paul Asente (Adobe, San Jose, California, United States); Julie Dorsey (Yale University, New Haven, Connecticut, United States)
Most augmented reality (AR) authoring tools only support the author's current environment, but designers often need to create site-specific experiences for a different environment. We propose DistanciAR, a novel tablet-based workflow for remote AR authoring. Our baseline solution involves three steps. A remote environment is captured by a camera with LiDAR; then, the author creates an AR experience from a different location using AR interactions; finally, a remote viewer consumes the AR content on site. A formative study revealed understanding and navigating the remote space as key challenges with this solution. We improved the authoring interface by adding two novel modes: Dollhouse, which renders a bird's-eye view, and Peek, which creates photorealistic composite images using captured images. A second study compared this improved system with the baseline, and participants reported that the new modes made it easier to understand and navigate the remote scene.
12
Quantitative Data Visualisation on Virtual Globes
Kadek Ananta Satriadi (Monash University, Melbourne, Australia)Barrett Ens (Monash University, Melbourne, Australia)Tobias Czauderna (Monash University, Victoria, Australia)Maxime Cordeil (Monash University, Melbourne, Australia)Bernhard Jenny (Monash University, Melbourne, Australia)
Geographic data visualisation on virtual globes is intuitive and widespread, but has not been thoroughly investigated. We explore two main design factors for quantitative data visualisation on virtual globes: i) commonly used primitives (2D bar, 3D bar, circle) and ii) the orientation of these primitives (tangential, normal, billboarded). We evaluate five distinctive visualisation idioms in a user study with 50 participants. The results show that aligning primitives tangentially on the globe’s surface decreases the accuracy of area-proportional circle visualisations, while the orientation does not have a significant effect on the accuracy of length-proportional bar visualisations. We also find that tangential primitives induce higher perceived mental load than other orientations. Guided by these results, we design a novel globe visualisation idiom, Geoburst, that combines a virtual globe and a radial bar chart. A preliminary evaluation reports potential benefits and drawbacks of the Geoburst visualisation.
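As a rough illustration of the two encodings compared in this study, the Python sketch below (not the authors' code) maps a data value to a length-proportional bar and an area-proportional circle; the maximum sizes are hypothetical constants chosen only for the example.

```python
import math

# Illustrative sketch: length-proportional bars vs. area-proportional circles.
# Bar length scales linearly with the value; circle radius scales with the
# square root so that circle *area* stays proportional to the value.

MAX_BAR_LENGTH_KM = 2000.0    # hypothetical maximum bar length above the globe
MAX_CIRCLE_RADIUS_KM = 500.0  # hypothetical maximum circle radius on the surface

def bar_length(value, max_value):
    """Length-proportional encoding, as used by 2D/3D bars."""
    return MAX_BAR_LENGTH_KM * (value / max_value)

def circle_radius(value, max_value):
    """Area-proportional encoding, as used by circles."""
    return MAX_CIRCLE_RADIUS_KM * math.sqrt(value / max_value)

if __name__ == "__main__":
    max_v = 100.0
    for v in (25.0, 50.0, 100.0):
        print(f"value={v:5.1f}  bar={bar_length(v, max_v):7.1f} km  "
              f"circle r={circle_radius(v, max_v):6.1f} km")
```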
12
The Role of Social Presence for Cooperation in Augmented Reality on Head Mounted Devices
Niklas Osmers (Clausthal University of Technology, Clausthal-Zellerfeld, Germany)Michael Prilla (Clausthal University of Technology, Clausthal-Zellerfeld, Germany)Oliver Blunk (Clausthal University of Technology, Clausthal-Zellerfeld, Germany)Gordon George Brown (Clausthal University of Technology, Clausthal-Zellerfeld, Germany)Marc Janßen (Clausthal University of Technology, Clausthal-Zellerfeld, Germany)Nicolas Kahrl (Clausthal University of Technology, Clausthal-Zellerfeld, Germany)
With growing interest in cooperation support using Augmented Reality (AR), social presence has become a popular measure of its quality. While this concept is established throughout cooperation research, its role in AR is still unclear: some work uses social presence as an indicator of support quality, while other work has found no impact at all. To clarify this role, we conducted a literature review of recent publications that empirically investigated social presence in cooperative AR. After a thorough selection procedure, we analyzed 19 publications according to factors influencing social presence and the impact of social presence on cooperation support. We found that certain interventions support social presence better than others, that social presence has an influence on users’ preferences, and that the relation between social presence and cooperation quality may depend on the symmetry of the cooperation task. This contributes to existing research by clarifying the role of social presence for cooperative AR and deriving corresponding design recommendations.
12
Investigating the Homogenization of Web Design: A Mixed-Methods Approach
Samuel Goree (Indiana University, Bloomington, Indiana, United States)Bardia Doosti (Indiana University Bloomington, Bloomington, Indiana, United States)David Crandall (Indiana University, Bloomington, Indiana, United States)Norman Makoto Su (Indiana University, Bloomington, Indiana, United States)
Visual design provides the backdrop to most of our interactions over the Internet, but has not received as much analytical attention as textual content. Combining computational with qualitative approaches, we investigate the growing concern that the visual design of the World Wide Web has homogenized over the past decade. By applying computer vision techniques to a large dataset of representative website images from 2003–2019, we show that designs have become significantly more similar since 2007, especially for page layouts, where the average distance between sites decreased by over 30%. Synthesizing interviews with 11 experienced web design professionals and our computational analyses, we discuss causes of this homogenization, including overlap in source code and libraries, color scheme standardization, and support for mobile devices. Our results seek to motivate future discussion of the factors that influence designers and their implications for the future trajectory of web design.
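For readers curious how such a homogenization trend can be quantified, here is a minimal Python sketch, not the authors' pipeline, that scores a set of pages by the average pairwise distance between layout feature vectors; the feature values below are synthetic stand-ins for features extracted from screenshots.

```python
import numpy as np
from itertools import combinations

# Illustrative sketch: a smaller average pairwise distance between layout
# feature vectors indicates a more homogeneous set of designs.

def average_pairwise_distance(features: np.ndarray) -> float:
    """features: (n_sites, n_dims) array; returns the mean Euclidean distance."""
    dists = [np.linalg.norm(a - b) for a, b in combinations(features, 2)]
    return float(np.mean(dists))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sites_2007 = rng.normal(0.0, 1.0, size=(50, 16))  # more varied layouts
    sites_2019 = rng.normal(0.0, 0.6, size=(50, 16))  # more similar layouts
    d07 = average_pairwise_distance(sites_2007)
    d19 = average_pairwise_distance(sites_2019)
    print(f"2007: {d07:.2f}  2019: {d19:.2f}  change: {100 * (d19 - d07) / d07:+.0f}%")
```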
12
Scene-Aware Behavior Synthesis for Virtual Pets in Mixed Reality
Wei Liang (Beijing Institute of Technology, Beijing, China)Xinzhe Yu (Beijing Institute of Technology, Beijing, China)Rawan Alghofaili (George Mason University, Fairfax, Virginia, United States)Yining Lang (Alibaba Group, Beijing, China)Lap-Fai Yu (George Mason University, Fairfax, Virginia, United States)
Virtual pets are an alternative to real pets, providing a substitute for people with allergies or preparing people for adopting a real pet. Recent advancements in mixed reality pave the way for virtual pets to provide a more natural and seamless experience for users. However, one key challenge is embedding environmental awareness into the virtual pet (e.g., identifying the food bowl's location) so that it can behave naturally in the real world. We propose a novel approach to synthesize virtual pet behaviors by considering scene semantics, enabling a virtual pet to behave naturally in mixed reality. Given a scene captured from the real world, our approach synthesizes a sequence of pet behaviors (e.g., resting after eating). Then, we assign each behavior in the sequence to a location in the real scene. We conducted user studies to evaluate our approach, which showed its efficacy in synthesizing natural virtual pet behaviors.
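A minimal Python sketch of the scene-aware assignment idea, assuming hypothetical object names, positions, and behavior labels; it is not the authors' system, only an illustration of matching each behavior's required object to a location detected in the scene.

```python
# Illustrative sketch: each behavior declares the semantic object it needs,
# and is anchored to that object's position if the scene contains one.

SCENE_OBJECTS = {          # hypothetical semantic labels -> 3D positions (meters)
    "food_bowl": (1.2, 0.0, 0.4),
    "sofa":      (3.0, 0.0, 1.5),
    "window":    (0.0, 1.0, 2.0),
}

BEHAVIOR_REQUIREMENTS = {  # hypothetical behavior -> required scene object
    "eat":          "food_bowl",
    "rest":         "sofa",
    "look_outside": "window",
}

def assign_locations(behavior_sequence):
    """Return (behavior, anchor position) pairs; behaviors whose required
    object is missing from the scene are skipped."""
    plan = []
    for behavior in behavior_sequence:
        required = BEHAVIOR_REQUIREMENTS.get(behavior)
        if required in SCENE_OBJECTS:
            plan.append((behavior, SCENE_OBJECTS[required]))
    return plan

if __name__ == "__main__":
    print(assign_locations(["eat", "rest", "look_outside"]))
```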
12
LightTouch Gadgets: Extending Interactions on Capacitive Touchscreens by Converting Light Emission to Touch Inputs
Kaori Ikematsu (Yahoo Japan Corporation, Tokyo, Japan)Kunihiro Kato (Tokyo University of Technology, Tokyo, Japan)Yoshihiro Kawahara (The University of Tokyo, Tokyo, Japan)
We present LightTouch, a 3D-printed passive gadget to enhance touch interactions on unmodified capacitive touchscreens. The LightTouch gadgets simulate finger operations such as tapping, swiping, and multi-touch gestures by means of conductive materials and light-dependent resistors (LDRs) embedded in the object. The touchscreen emits visible light, and the LDR senses the level of this light, which changes its resistance value. By controlling the screen brightness, an application can intentionally connect or disconnect the path between GND and the touchscreen through the gadget, thus allowing touch inputs to be controlled. In contrast to conventional physical extensions for touchscreens, our technique requires neither continuous finger contact on the conductive part nor the use of batteries. As such, it opens up new possibilities for touchscreen interactions beyond the simple automation of touch inputs, such as establishing a communication channel between devices, enhancing the trackability of tangibles, and inter-application operations.
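The brightness-to-touch principle can be sketched in a few lines. The Python snippet below is an illustration rather than anything shipped with LightTouch: it simulates the threshold behavior, where crossing a (hypothetical) brightness level stands in for the LDR closing the conductive path to GND and the touchscreen registering a touch.

```python
# Illustrative sketch: bright screen frames above a threshold are treated as
# "path closed", and each rising edge is counted as one simulated touch-down.

LDR_THRESHOLD = 0.6  # hypothetical normalized brightness that closes the path

def touches_from_brightness(brightness_frames):
    """Return one simulated touch event per rising edge above the threshold."""
    events, touching = [], False
    for t, level in enumerate(brightness_frames):
        closed = level >= LDR_THRESHOLD
        if closed and not touching:
            events.append(t)  # touch-down at frame t
        touching = closed
    return events

if __name__ == "__main__":
    # Two bright pulses separated by dark frames -> two touch events.
    frames = [0.1, 0.2, 0.9, 0.9, 0.1, 0.1, 0.8, 0.2]
    print(touches_from_brightness(frames))  # [2, 6]
```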
12
Can You Hear My Heartbeat?: Hearing an Expressive Biosignal Elicits Empathy
R. Michael Winters (Georgia Institute of Technology, Atlanta, Georgia, United States)Bruce N. Walker (Georgia Institute of Technology, Atlanta, Georgia, United States)Grace Leslie (Georgia Tech, Atlanta, Georgia, United States)
Interfaces designed to elicit empathy provide an opportunity for HCI with important pro-social outcomes. Recent research has demonstrated that perceiving expressive biosignals can facilitate emotional understanding and connection with others, but this work has been largely limited to visual approaches. We propose that hearing these signals will also elicit empathy, and test this hypothesis with sounding heartbeats. In a lab-based within-subjects study, participants (N=27) completed an emotion recognition task in different heartbeat conditions. We found that hearing heartbeats changed participants’ emotional perspective and increased their reported ability to “feel what the other was feeling.” From these results, we argue that auditory heartbeats are well-suited as an empathic intervention, and might be particularly useful for certain groups and use-contexts because of their musical and non-visual nature. This work establishes a baseline for empathic auditory interfaces, and offers a method to evaluate the effects of future designs.
12
Interaction Illustration Taxonomy: Classification of Styles and Techniques for Visually Representing Interaction Scenarios
Axel Antoine (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France)Sylvain Malacria (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France)Nicolai Marquardt (University College London, London, United Kingdom)Géry Casiez (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France)
Static illustrations are ubiquitous means to represent interaction scenarios. Across papers and reports, these visuals demonstrate people's use of devices, explain systems, or show design spaces. Creating such figures is challenging, and very little is known about the overarching strategies for visually representing interaction scenarios. To ease this task, we contribute a unified taxonomy of design elements that compose such figures. In particular, we provide a detailed classification of Structural and Interaction strategies, such as composition, visual techniques, dynamics, representation of users, and many others -- all in the context of the type of scenario. This taxonomy can inform researchers' choices when creating new figures by providing a concise synthesis of visual strategies and revealing approaches they were not aware of before. Furthermore, to support the community in creating further taxonomies, we also provide three open-source software tools that facilitate the coding process and visual exploration of the coding scheme.
11
MetaMap: Supporting Visual Metaphor Ideation through Multi-dimensional Example-based Exploration
Youwen Kang (Hong Kong University of Science and Technology, Hong Kong, Hong Kong, China)Zhida Sun (Hong Kong University of Science and Technology, Hong Kong, China)Sitong Wang (Columbia University, New York, New York, United States)Zeyu Huang (Hong Kong University of Science and Technology, Hong Kong, China)Ziming Wu (Hong Kong University of Science and Technology, Hong Kong, China)Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, Hong Kong)
Visual metaphors, which are widely used in graphic design, can deliver messages in creative ways by fusing different objects. The keys to creating visual metaphors are diverse exploration and creative combinations, which are challenging to achieve with conventional methods like image searching. To streamline this ideation process, we propose to use a mind-map-like structure to recommend materials and assist users in exploring them. We present MetaMap, a supporting tool which inspires visual metaphor ideation through multi-dimensional example-based exploration. To facilitate the divergence and convergence of the ideation process, MetaMap provides 1) sample images based on keyword association and color filtering; 2) example-based exploration in semantics, color, and shape dimensions; and 3) thinking path tracking and idea recording. We conduct a within-subject study with 24 design enthusiasts, using a Pinterest-like interface as the baseline. Our evaluation results suggest that MetaMap provides an engaging ideation process and helps participants create diverse and creative ideas.
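As a rough sketch of the first-pass filtering the abstract mentions (keyword association plus color filtering), the Python snippet below filters hypothetical image records by keyword and by distance to a target color; it is illustrative only and not MetaMap's implementation.

```python
from math import dist

# Illustrative sketch: keep images that mention a keyword and whose dominant
# color lies close to a target RGB color. The image records are hypothetical.

IMAGES = [
    {"id": 1, "keywords": {"lion", "strength"}, "color": (200, 150, 40)},
    {"id": 2, "keywords": {"clock", "time"},    "color": (40, 40, 200)},
    {"id": 3, "keywords": {"lion", "mane"},     "color": (220, 180, 60)},
]

def filter_images(images, keyword, target_rgb, max_color_dist=80.0):
    """Return images matching the keyword and within the color tolerance."""
    return [
        img for img in images
        if keyword in img["keywords"]
        and dist(img["color"], target_rgb) <= max_color_dist
    ]

if __name__ == "__main__":
    print(filter_images(IMAGES, "lion", (210, 160, 50)))  # images 1 and 3
```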
11
“Grip-that-there”: An Investigation of Explicit and Implicit Task Allocation Techniques for Human-Robot Collaboration
Karthik Mahadevan (University of Toronto, Toronto, Ontario, Canada)Mauricio Sousa (University of Toronto, Toronto, Ontario, Canada)Anthony Tang (University of Toronto, Toronto, Ontario, Canada)Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
In ad-hoc human-robot collaboration (HRC), humans and robots work on a task without pre-planning the robot's actions prior to execution; instead, task allocation occurs in real-time. However, prior research has largely focused on task allocations that are pre-planned; there has not been a comprehensive exploration or evaluation of techniques where task allocation is adjusted in real-time. Inspired by HCI research on territoriality and proxemics, we propose a design space of novel task allocation techniques, including both explicit techniques, where the user maintains agency, and implicit techniques, where the efficiency of automation can be leveraged. The techniques were implemented and evaluated using a tabletop HRC simulation in VR. A 16-participant study, which presented variations of a collaborative block-stacking task, showed that implicit techniques enable efficient task completion and task parallelization, and should be augmented with explicit mechanisms to provide users with fine-grained control.
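To make the explicit/implicit distinction concrete, the Python sketch below shows one possible implicit rule in the spirit of proxemics, assigning each block to whichever agent is currently closer; this is an assumed illustration, not one of the paper's techniques.

```python
from math import dist

# Illustrative sketch: a proximity-based implicit allocation policy.
# Positions are hypothetical 2D tabletop coordinates.

def implicit_allocation(blocks, human_pos, robot_pos):
    """Return {block_id: 'human' | 'robot'} based on simple proximity."""
    allocation = {}
    for block_id, block_pos in blocks.items():
        closer_to_robot = dist(block_pos, robot_pos) < dist(block_pos, human_pos)
        allocation[block_id] = "robot" if closer_to_robot else "human"
    return allocation

if __name__ == "__main__":
    blocks = {"A": (0.2, 0.1), "B": (0.9, 0.8)}
    print(implicit_allocation(blocks, human_pos=(0.0, 0.0), robot_pos=(1.0, 1.0)))
```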
11
Designing Telepresence Drones to Support Synchronous, Mid-air, Remote Collaboration: An Exploratory Study
Mehrnaz Sabet (Cornell University, Ithaca, New York, United States)Mania Orand (University of Washington, Seattle, Washington, United States)David W. McDonald (University of Washington, Seattle, Washington, United States)
Drones are increasingly used to support humanitarian crises and events that involve dangerous or costly tasks. While drones have great potential for remote collaborative work and aerial telepresence, existing drone technology is limited in its support for synchronous collaboration among multiple remote users. Through three design iterations and evaluations, we prototyped Squadrone, a novel aerial telepresence platform that supports synchronous mid-air collaboration among multiple remote users. We present our design and report results from evaluating our iterations with 13 participants in 3 different collaboration configurations. Our first design iteration validates the basic functionality of the platform. Then, we establish the effectiveness of collaboration using a 360-degree shared aerial display. Finally, we simulate a type of search task in an open environment to see if collaborative telepresence impacts members’ participation. The results validate some initial goals for Squadrone and are used to reflect back on a recent telepresence design framework.
11
GamesBond: Bimanual Haptic Illusion of Physically Connected Objects for Immersive VR Using Grip Deformation
Neung Ryu (KAIST, Daejeon, Korea, Republic of)Hye-Young Jo (KAIST, Daejeon, Korea, Republic of)Michel Pahud (Microsoft Research, Redmond, Washington, United States)Mike Sinclair (Microsoft, Redmond, Washington, United States)Andrea Bianchi (KAIST, Daejeon, Korea, Republic of)
Virtual Reality experiences, such as games and simulations, typically support the usage of bimanual controllers to interact with virtual objects. To recreate the haptic sensation of holding objects of various shapes and behaviors with both hands, previous researchers have used mechanical linkages between the controllers that render adjustable stiffness. However, the linkage cannot quickly adapt to simulate dynamic objects, nor can it be removed to support free movements. This paper introduces GamesBond, a pair of 4-DoF controllers without a physical linkage but capable of creating the illusion of being connected as a single device, forming a virtual bond. The two controllers work together by dynamically displaying and physically rendering deformations of the hand grips, thus allowing users to perceive a single connected object between the hands, such as a jumping rope. With a user study and various applications, we show that GamesBond increases the realism, immersion, and enjoyment of bimanual interaction.
11
Flower Jelly Printer: Slit Injection Printing for Parametrically Designed Flower Jelly
Mako Miyatake (The University of Tokyo, Tokyo, Japan)Koya Narumi (The University of Tokyo, Tokyo, Japan)Yuji Sekiya (The University of Tokyo, Bunkyo-ku, Tokyo, Japan)Yoshihiro Kawahara (The University of Tokyo, Bunkyo-ku, Tokyo, Japan)
Flower jellies, a delicate dessert in which a flower-shaped jelly floats inside another clear jelly, fascinate people with both their beauty and elaborate construction. In efforts to simplify the challenging fabrication and enrich the design space of this dessert, we present Flower Jelly Printer: a printing device and design software for digitally fabricating flower jellies. Our design software lets users play with parameters and preview the resulting forms until they achieve their desired shapes. We also developed slit injection printing, which directly injects colored jelly into a base jelly, and we share several design examples to show the breadth of design possibilities. Finally, a user study with novice and experienced users demonstrates that our system benefits creators of all experience levels through iterative design and precise fabrication. We hope to enable more people to design and create their own flower jellies while expanding both access to and the design space of digitally fabricated foods.
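As an illustration of how parametric flower design might look in code, the Python sketch below generates a petal outline from a polar rose curve, with petal count and radius as the user-adjustable parameters; this parameterization is an assumption for illustration only, not the authors' software.

```python
import math

# Illustrative sketch: a rose curve r = radius * |cos((n/2) * theta)| traces
# an n-petal outline that could serve as a printable flower cross-section.

def flower_outline(num_petals=6, radius=30.0, samples=360):
    """Return (x, y) points of an n-petal outline, in millimeters."""
    points = []
    for i in range(samples):
        theta = 2 * math.pi * i / samples
        r = radius * abs(math.cos(num_petals / 2 * theta))
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

if __name__ == "__main__":
    outline = flower_outline(num_petals=5, radius=25.0)
    print(len(outline), outline[:3])
```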