List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

12
XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces
João Marcelo Evangelista Belo (Aarhus University, Aarhus, Denmark), Anna Maria Feit (ETH Zurich, Zurich, Switzerland), Tiare Feuchtner (Aarhus University, Aarhus, Denmark), Kaj Grønbæk (Aarhus University, Aarhus, Denmark)
Arm discomfort is a common issue in Cross Reality applications involving prolonged mid-air interaction. Solving this problem is difficult because of the lack of tools and guidelines for 3D user interface design. Therefore, we propose a method to make existing ergonomic metrics available to creators during design by estimating the interaction cost at each reachable position in the user's environment. We present XRgonomics, a toolkit to visualize the interaction cost and make it available at runtime, allowing creators to identify UI positions that optimize users' comfort. Two scenarios show how the toolkit can support 3D UI design and dynamic adaptation of UIs based on spatial constraints. We present results from a walkthrough demonstration, which highlight the potential of XRgonomics to make ergonomics metrics accessible during the design and development of 3D UIs. Finally, we discuss how the toolkit may address design goals beyond ergonomics.
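To make the precomputation concrete, here is a minimal Python sketch of the idea the abstract describes: discretize the space reachable by the arm, attach an ergonomic cost to every voxel, and answer placement queries by lookup. The cost function below is an invented placeholder; XRgonomics itself exposes established metrics such as RULA and Consumed Endurance.

```python
import numpy as np

SHOULDER = np.array([0.0, 1.4, 0.0])  # assumed shoulder position (m)

def interaction_cost(point):
    """Placeholder ergonomic cost: penalize arm extension and raising
    the arm above the shoulder (stand-in for RULA-like metrics)."""
    offset = point - SHOULDER
    return np.linalg.norm(offset) + 2.0 * max(0.0, offset[1])

def build_cost_grid(step=0.05, reach=0.7):
    """Precompute the cost of interacting at each reachable voxel."""
    grid = {}
    axis = np.arange(-reach, reach + step, step)
    for x in axis:
        for y in axis:
            for z in axis:
                p = SHOULDER + np.array([x, y, z])
                if np.linalg.norm(p - SHOULDER) <= reach:  # within arm's reach
                    grid[tuple(np.round(p, 3))] = interaction_cost(p)
    return grid

# A UI element is then placed at the most comfortable candidate position.
grid = build_cost_grid()
best_position = min(grid, key=grid.get)
```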
11
Reading in VR: The Effect of Text Presentation Type and Location
Rufat Rzayev (University of Regensburg, Regensburg, Germany), Polina Ugnivenko (University of Regensburg, Regensburg, Germany), Sarah Graf (University of Regensburg, Regensburg, Germany), Valentin Schwind (Frankfurt University of Applied Sciences, Frankfurt, Germany), Niels Henze (University of Regensburg, Regensburg, Germany)
Reading is a fundamental activity to obtain information both in the real and the digital world. Virtual reality (VR) allows novel approaches for users to view, read, and interact with a text. However, for efficient reading, it is necessary to understand how a text should be displayed in VR without impairing the VR experience. Therefore, we conducted a study with 18 participants to investigate text presentation type and location in VR. We compared world-fixed, edge-fixed, and head-fixed text locations. Texts were displayed using Rapid Serial Visual Presentation (RSVP) or as a paragraph. We found that RSVP is a promising presentation type for reading short texts displayed in edge-fixed or head-fixed location in VR. The paragraph presentation type using world-fixed or edge-fixed location is promising for reading long text if movement in the virtual environment is not required. Insights from our study inform the design of reading interfaces for VR applications.
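For context, RSVP presents one word at a time at a fixed anchor, which is why it pairs naturally with the edge-fixed and head-fixed locations studied here. A minimal sketch of the scheduling logic, with the print call standing in for rendering at a VR anchor and 300 wpm as an assumed rate rather than the study's parameter:

```python
import time

def rsvp(text, wpm=300):
    """Rapid Serial Visual Presentation: flash one word at a time at a
    fixed rate. In VR the word would be drawn at an edge-fixed or
    head-fixed anchor; printing in place stands in for that here."""
    delay = 60.0 / wpm  # seconds per word
    for word in text.split():
        print(f"\r{word:^24}", end="", flush=True)
        time.sleep(delay)

rsvp("RSVP is a promising presentation type for short texts in VR")
```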
11
Proxemics and Social Interactions in an Instrumented Virtual Reality Workshop
Julie R. Williamson (University of Glasgow, Glasgow, United Kingdom), Jie Li (Centrum Wiskunde & Informatica, Amsterdam, Netherlands), David A. Shamma (Centrum Wiskunde & Informatica, Amsterdam, Netherlands), Vinoba Vinayagamoorthy (BBC Research & Development, London, United Kingdom), Pablo Cesar (CWI, Amsterdam, Netherlands)
Virtual environments (VEs) can create collaborative and social spaces, which are increasingly important in the face of remote work and travel reduction. Recent advances, such as more open and widely available platforms, create new possibilities to observe and analyse interaction in VEs. Using a custom instrumented build of Mozilla Hubs to measure position and orientation, we conducted an academic workshop to facilitate a range of typical workshop activities. We analysed social interactions during a keynote, small group breakouts, and informal networking/hallway conversations. Our mixed-methods approach combined environment logging, observations, and semi-structured interviews. The results demonstrate how small and large spaces influenced group formation, shared attention, and personal space, where smaller rooms facilitated more cohesive groups while larger rooms made small group formation challenging but personal space more flexible. Beyond our findings, we show how the combination of data and insights can fuel collaborative spaces' design and deliver more effective virtual workshops.
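As an illustration of what such instrumentation enables, the logged positions alone already support basic proxemics measures. A sketch under an assumed log format (not the authors' Hubs schema):

```python
from itertools import combinations
import math

# Hypothetical log sample: user id -> (x, y, z) position in metres.
positions = {"alice": (1.0, 0.0, 2.0), "bob": (1.5, 0.0, 2.2), "carol": (4.0, 0.0, 5.0)}

def pairwise_distances(positions):
    """Interpersonal distances, the raw ingredient of proxemics analyses
    such as group detection and personal-space measures."""
    return {(a, b): math.dist(positions[a], positions[b])
            for a, b in combinations(positions, 2)}

# Pairs closer than ~1.2 m fall within a conventional personal-space band.
close_pairs = {p: d for p, d in pairwise_distances(positions).items() if d < 1.2}
```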
11
Gaze-Supported 3D Object Manipulation in Virtual Reality
Difeng Yu (The University of Melbourne, Melbourne, VIC, Australia), Xueshi Lu (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China), Rongkai Shi (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China), Hai-Ning Liang (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China), Tilman Dingler (The University of Melbourne, Melbourne, VIC, Australia), Eduardo Velloso (The University of Melbourne, Melbourne, VIC, Australia), Jorge Goncalves (The University of Melbourne, Melbourne, VIC, Australia)
This paper investigates integration, coordination, and transition strategies of gaze and hand input for 3D object manipulation in VR. Specifically, this work aims to understand whether incorporating gaze input can benefit VR object manipulation tasks, and how it should be combined with hand input for improved usability and efficiency. We designed four gaze-supported techniques that leverage different combination strategies for object manipulation and evaluated them in two user studies. Overall, we show that gaze did not offer significant performance benefits for transforming objects in the primary working space, where all objects were located in front of the user and within the arm-reach distance, but can be useful for a larger environment with distant targets. We further offer insights regarding combination strategies of gaze and hand input, and derive implications that can help guide the design of future VR systems that incorporate gaze input for 3D object manipulation.
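One plausible combination strategy of the kind the paper evaluates, sketched below for illustration (this is not one of the four studied techniques): gaze pre-selects the object nearest the gaze ray, and a hand pinch confirms the selection for subsequent manipulation.

```python
import numpy as np

def gaze_target(gaze_origin, gaze_dir, objects, max_angle_deg=5.0):
    """Pre-selection: return the object whose centre lies closest to the
    gaze ray; gaze_dir is assumed to be a unit vector."""
    best, best_angle = None, max_angle_deg
    for name, centre in objects.items():
        to_obj = np.asarray(centre, dtype=float) - gaze_origin
        to_obj /= np.linalg.norm(to_obj)
        angle = np.degrees(np.arccos(np.clip(np.dot(gaze_dir, to_obj), -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

def update(gaze_origin, gaze_dir, pinch_down, objects, state):
    """Gaze selects, pinch confirms; while pinched, hand motion (omitted
    here) would translate/rotate/scale state["held"]."""
    if pinch_down and state.get("held") is None:
        state["held"] = gaze_target(gaze_origin, gaze_dir, objects)
    elif not pinch_down:
        state["held"] = None
    return state
```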
10
Towards an Understanding of Situated AR Visualization for Basketball Free-Throw Training
Tica Lin (Harvard University, Cambridge, Massachusetts, United States), Rishi Singh (Harvard University, Cambridge, Massachusetts, United States), Yalong Yang (Harvard University, Cambridge, Massachusetts, United States), Carolina Nobre (Harvard University, Cambridge, Massachusetts, United States), Johanna Beyer (Harvard University, Cambridge, Massachusetts, United States), Maurice Smith (Harvard University, Cambridge, Massachusetts, United States), Hanspeter Pfister (Harvard University, Cambridge, Massachusetts, United States)
We present an observational study to compare co-located and situated real-time visualizations in basketball free-throw training. Our goal is to understand the advantages and concerns of applying immersive visualization to real-world skill-based sports training and to provide insights for designing AR sports training systems. We design both a situated 3D visualization on a head-mounted display and a 2D visualization on a co-located display to provide immediate visual feedback on a player's shot performance. Using a within-subject study design with experienced basketball shooters, we characterize user goals, report on qualitative training experiences, and compare the quantitative training results. Our results show that real-time visual feedback helps athletes refine subsequent shots. Shooters in our study achieve greater angle consistency with our visual feedback. Furthermore, AR visualization promotes an increased focus on body form in athletes. Finally, we present suggestions for the design of future sports AR studies.
10
MARVIS: Combining Mobile Devices and Augmented Reality for Visual Data Analysis
Ricardo Langner (Technische Universität Dresden, Dresden, Germany), Marc Satkowski (Technische Universität Dresden, Dresden, Germany), Wolfgang Büschel (Technische Universität Dresden, Dresden, Germany), Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
We present MARVIS, a conceptual framework that combines mobile devices and head-mounted Augmented Reality (AR) for visual data analysis. We propose novel concepts and techniques addressing visualization-specific challenges. By showing additional 2D and 3D information around and above displays, we extend their limited screen space. AR views between displays as well as linking and brushing are also supported, making relationships between separated visualizations plausible. We introduce the design process and rationale for our techniques. To validate MARVIS' concepts and show their versatility and widespread applicability, we describe six implemented example use cases. Finally, we discuss insights from expert hands-on reviews. As a result, we contribute to a better understanding of how the combination of one or more mobile devices with AR can benefit visual data analysis. By exploring this new type of visualization environment, we hope to provide a foundation and inspiration for future mobile data visualizations.
10
Grand Challenges in Immersive Analytics
Barrett Ens (Monash University, Melbourne, Australia), Benjamin Bach (Edinburgh University, Edinburgh, United Kingdom), Maxime Cordeil (Monash University, Melbourne, Australia), Ulrich Engelke (CSIRO, Kensington, WA, Australia), Marcos Serrano (IRIT - Elipse, Toulouse, France), Wesley Willett (University of Calgary, Calgary, Alberta, Canada), Arnaud Prouzeau (Monash University, Melbourne, Australia), Christoph Anthes (University of Applied Sciences Upper Austria, Hagenberg, Austria), Wolfgang Büschel (Technische Universität Dresden, Dresden, Germany), Cody Dunne (Northeastern University, Boston, Massachusetts, United States), Tim Dwyer (Monash University, Melbourne, Australia), Jens Grubert (Coburg University, Coburg, Bavaria, Germany), Jason Haga (AIST, Tsukuba, Ibaraki, Japan), Nurit Kirshenbaum (University of Hawaii at Manoa, Honolulu, Hawaii, United States), Dylan Kobayashi (University of Hawaiʻi at Mānoa, Honolulu, Hawaii, United States), Tica Lin (Harvard University, Cambridge, Massachusetts, United States), Monsurat Olaosebikan (Tufts University, Medford, Massachusetts, United States), Fabian Pointecker (University of Applied Sciences Upper Austria, Hagenberg, Austria), David Saffo (Northeastern University, Boston, Massachusetts, United States), Nazmus Saquib (MIT, Cambridge, Massachusetts, United States), Dieter Schmalstieg (Graz University of Technology, Graz, Austria), Danielle Albers Szafir (University of Colorado Boulder, Boulder, Colorado, United States), Matt Whitlock (University of Colorado, Boulder, Colorado, United States), Yalong Yang (Harvard University, Cambridge, Massachusetts, United States)
Immersive Analytics is a quickly evolving field that unites several areas such as visualisation, immersive environments, and human-computer interaction to support human data analysis with emerging technologies. This research has thrived over the past years with multiple workshops, seminars, and a growing body of publications, spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper aims toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.
10
Touch&Fold: A Foldable Haptic Actuator for Rendering Touch in Mixed Reality
Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States), Pengyu Li (University of Chicago, Chicago, Illinois, United States), Romain Nith (University of Chicago, Chicago, Illinois, United States), Joshua Fonseca (University of Chicago, Chicago, Illinois, United States), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a nail-mounted foldable haptic device that provides tactile feedback to mixed reality (MR) environments by pressing against the user’s fingerpad when a user touches a virtual object. What is novel in our device is that it quickly tucks away when the user interacts with real-world objects. Its design allows it to fold back on top of the user’s nail when not in use, keeping the user’s fingerpad free to, for instance, manipulate handheld tools and other objects while in MR. To achieve this, we engineered a wireless and self-contained haptic device, which measures 24×24×41 mm and weighs 9.5 g. Furthermore, our foldable end-effector also features a linear resonant actuator, allowing it to render not only touch contacts (i.e., pressure) but also textures (i.e., vibrations). We demonstrate how our device renders contacts with MR surfaces, buttons, low- and high-frequency textures. In our first user study, we found that participants perceived our device to be more realistic than a previous haptic device that also leaves the fingerpad free (i.e., fingernail vibration). In our second user study, we investigated the participants’ experience while using our device in a real-world task that involved physical objects. We found that our device allowed participants to use the same finger to manipulate handheld tools, small objects, and even feel textures and liquids, without much hindrance to their dexterity, while feeling haptic feedback when touching MR interfaces.
9
Radi-Eye: Hands-free Radial Interfaces for 3D Interaction using Gaze-activated Head-crossing
Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom), Dominic Potts (Lancaster University, Lancaster, Lancashire, United Kingdom), Bill Bapisch (Ludwig-Maximilians-Universität, Munich, Germany), Hans Gellersen (Aarhus University, Aarhus, Denmark)
Eye gaze and head movement are attractive for hands-free 3D interaction in head-mounted displays, but existing interfaces afford only limited control. Radi-Eye is a novel pop-up radial interface designed to maximise expressiveness with input from only the eyes and head. Radi-Eye provides widgets for discrete and continuous input and scales to support larger feature sets. Widgets can be selected with Look & Cross, using gaze for pre-selection followed by head-crossing as trigger and for manipulation. The technique leverages natural eye-head coordination where eye and head move at an offset unless explicitly brought into alignment, enabling interaction without risk of unintended input. We explore Radi-Eye in three augmented and virtual reality applications, and evaluate the effect of radial interface scale and orientation on performance with Look & Cross. The results show that Radi-Eye provides users with fast and accurate input while opening up a new design space for hands-free fluid interaction.
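The Look & Cross mechanic can be sketched in a few lines: gaze over a sector of the radial interface pre-selects (highlights) a widget, and the widget only fires when the head pointer crosses the ring boundary outward within that sector. A rough 2D sketch with invented geometry; thresholds and smoothing are omitted:

```python
import math

def sector_of(point, centre, n_widgets):
    """Index of the radial sector containing a 2D pointer position."""
    angle = math.degrees(math.atan2(point[1] - centre[1], point[0] - centre[0])) % 360
    return int(angle // (360 / n_widgets))

def look_and_cross(gaze, head, centre, radius, n_widgets, state):
    """Gaze pre-selects; a head-crossing of the ring boundary inside the
    pre-selected sector triggers, so gaze alone never causes input."""
    state["highlight"] = sector_of(gaze, centre, n_widgets)  # pre-selection
    was_inside = state.get("head_inside", True)
    inside = math.dist(head, centre) < radius
    state["head_inside"] = inside
    if was_inside and not inside and sector_of(head, centre, n_widgets) == state["highlight"]:
        return state["highlight"]  # widget activated
    return None
```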
9
AffectiveSpotlight: Facilitating the Communication of Affective Responses from Audience Members during Online Presentations
Prasanth Murali (Northeastern University, Boston, Massachusetts, United States), Javier Hernandez (Microsoft Research, Cambridge, Massachusetts, United States), Daniel McDuff (Microsoft Research, Cambridge, Massachusetts, United States), Kael Rowan (Microsoft Research, Redmond, Washington, United States), Jina Suh (Microsoft Research, Redmond, Washington, United States), Mary Czerwinski (Microsoft Research, Redmond, Washington, United States)
The ability to monitor audience reactions is critical when delivering presentations. However, current videoconferencing platforms offer limited solutions to support this. This work leverages recent advances in affect sensing to capture and facilitate communication of relevant audience signals. Using an exploratory survey (N=175), we assessed the most relevant audience responses such as confusion, engagement, and head-nods. We then implemented AffectiveSpotlight, a Microsoft Teams bot that analyzes facial responses and head gestures of audience members and dynamically spotlights the most expressive ones. In a within-subjects study with 14 groups (N=117), we observed that the system made presenters significantly more aware of their audience, speak for a longer period of time, and self-assess the quality of their talk more similarly to the audience members, compared to two control conditions (randomly-selected spotlight and default platform UI). We provide design recommendations for future affective interfaces for online presentations based on feedback from the study.
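The spotlighting loop behind such a system is easy to sketch: score each audience member's sensed signals and spotlight the highest scorer, rate-limited so the spotlight does not flicker. The signal names and weights below are invented for illustration; in the paper the choice of signals (e.g., confusion, engagement, head-nods) came from the survey, and sensing is done by trained models rather than hand weights.

```python
def expressiveness(signals):
    """Hypothetical expressiveness score over per-frame affect signals."""
    return (2.0 * signals.get("head_nod", 0.0)
            + 1.5 * signals.get("confusion", 0.0)
            + 1.0 * signals.get("smile", 0.0))

def pick_spotlight(audience, state, now, min_interval=15.0):
    """Spotlight the most expressive member, switching at most once per
    min_interval seconds to keep the presenter's view stable."""
    if now - state["last_switch"] < min_interval:
        return state["current"]
    best = max(audience, key=lambda m: expressiveness(audience[m]))
    if best != state["current"]:
        state.update(last_switch=now, current=best)
    return best

state = {"last_switch": float("-inf"), "current": None}
audience = {"ann": {"head_nod": 0.8}, "ben": {"confusion": 0.3}}
print(pick_spotlight(audience, state, now=0.0))  # -> ann
```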
9
ThermoCaress: A Wearable Haptic Device with Illusory Moving Thermal Stimulation
Yuhu Liu (The University of Tokyo, Tokyo, Japan), Satoshi Nishikawa (The University of Tokyo, Tokyo, Japan), Young ah Seong (Hosei University, Tokyo, Japan), Ryuma Niiyama (The University of Tokyo, Tokyo, Japan), Yasuo Kuniyoshi (The University of Tokyo, Tokyo, Japan)
We propose ThermoCaress, a haptic device to create a stroking sensation on the forearm using pressure force and present thermal feedback simultaneously. In our method, based on the phenomenon of thermal referral, by overlapping a stroke of pressure force, users feel as if the thermal stimulation moves although the position of the temperature source is static. We designed the device to be compact and soft, using microblowers and inflatable pouches for presenting pressure force and water for presenting thermal feedback. Our user study showed that the device succeeded in generating thermal referrals and creating a moving thermal illusion. The results also suggested that cold temperature enhances the pleasantness of stroking. Our findings contribute to expanding the potential of thermal haptic devices.
9
MIRIA: A Mixed Reality Toolkit for the In-Situ Visualization and Analysis of Spatio-Temporal Interaction Data
Wolfgang Büschel (Technische Universität Dresden, Dresden, Germany), Anke Lehmann (Technische Universität Dresden, Dresden, Germany), Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
In this paper, we present MIRIA, a Mixed Reality Interaction Analysis toolkit designed to support the in-situ visual analysis of user interaction in mixed reality and multi-display environments. So far, there are few options to effectively explore and analyze interaction patterns in such novel computing systems. With MIRIA, we address this gap by supporting the analysis of user movement, spatial interaction, and event data by multiple, co-located users directly in the original environment. Based on our own experiences and an analysis of the typical data, tasks, and visualizations used in existing approaches, we identify requirements for our system. We report on the design and prototypical implementation of MIRIA, which is informed by these requirements and offers various visualizations such as 3D movement trajectories, position heatmaps, and scatterplots. To demonstrate the value of MIRIA for real-world analysis tasks, we conducted expert feedback sessions using several use cases with authentic study data.
9
MeetingCoach: An Intelligent Dashboard for Supporting Effective & Inclusive Meetings
Samiha Samrose (University of Rochester, Rochester, New York, United States), Daniel McDuff (Microsoft, Seattle, Washington, United States), Robert Sim (Microsoft, Redmond, Washington, United States), Jina Suh (Microsoft Research, Redmond, Washington, United States), Kael Rowan (Microsoft Research, Redmond, Washington, United States), Javier Hernandez (Microsoft Research, Cambridge, Massachusetts, United States), Sean Rintel (Microsoft Research, Cambridge, United Kingdom), Kevin Moynihan (Microsoft Research, Barcelona, Spain), Mary Czerwinski (Microsoft Research, Redmond, Washington, United States)
Video-conferencing is essential for many companies, but its limitations in conveying social cues can lead to ineffective meetings. We present MeetingCoach, an intelligent post-meeting feedback dashboard that summarizes contextual and behavioral meeting information. Through an exploratory survey (N=120), we identified important signals (e.g., turn taking, sentiment) and used these insights to create a wireframe dashboard. The design was evaluated with in situ participants (N=16) who helped identify the components they would prefer in a post-meeting dashboard. After recording video-conferencing meetings of eight teams over four weeks, we developed an AI system to quantify the meeting features and created personalized dashboards for each participant. Through interviews and surveys (N=23), we found that reviewing the dashboard helped improve attendees' awareness of meeting dynamics, with implications for improved effectiveness and inclusivity. Based on our findings, we provide suggestions for future feedback system designs of video-conferencing meetings.
9
Standardizing Participant Compensation Reporting in HCI: A Meta-Review and Recommendations for the Field
Jessica Pater (Parkview Health, Fort Wayne, Indiana, United States), Amanda Coupe (Parkview Health, Fort Wayne, Indiana, United States), Rachel Pfafman (Parkview Health, Fort Wayne, Indiana, United States), Chanda Phelan (University of Michigan, Ann Arbor, Michigan, United States), Tammy Toscos (Parkview Health, Fort Wayne, Indiana, United States), Maia Jacobs (Northwestern University, Evanston, Illinois, United States)
The user study is a fundamental method used in HCI. In designing user studies, we often use compensation strategies to incentivize recruitment. However, compensation can also lead to ethical issues, such as coercion. The CHI community has yet to establish best practices for participant compensation. Through a systematic review of manuscripts at CHI and other associated publication venues, we found high levels of variation in the compensation strategies used within the community and how we report on this aspect of the study methods. A qualitative analysis of justifications offered for compensation sheds light into how some researchers are currently contextualizing this practice. This paper provides a description of current compensation strategies and information that can inform the design of compensation strategies in future studies. The findings may be helpful to generate productive discourse in the HCI community towards the development of best practices for participant compensation in user studies.
9
Physiological and Perceptual Responses to Athletic Avatars while Cycling in Virtual Reality
Martin Kocur (University of Regensburg, Regensburg, Germany), Florian Habler (University of Regensburg, Regensburg, Germany), Valentin Schwind (Frankfurt University of Applied Sciences, Frankfurt, Germany), Paweł W. Woźniak (Utrecht University, Utrecht, Netherlands), Christian Wolff (University of Regensburg, Regensburg, Bavaria, Germany), Niels Henze (University of Regensburg, Regensburg, Germany)
Avatars in virtual reality (VR) enable embodied experiences and induce the Proteus effect - a shift in behavior and attitude to mimic one's digital representation. Previous work found that avatars associated with physical strength can decrease users' perceived exertion when performing physical tasks. However, it is unknown if an avatar's appearance can also influence the user's physiological response to exercises. Therefore, we conducted an experiment with 24 participants to investigate the effect of avatars' athleticism on heart rate and perceived exertion while cycling in VR following a standardized protocol. We found that the avatars' athleticism has a significant and systematic effect on users' heart rate and perceived exertion. We discuss potential moderators such as body ownership and users' level of fitness. Our work contributes to the emerging area of VR exercise systems.
9
Designing Telepresence Drones to Support Synchronous, Mid-air, Remote Collaboration: An Exploratory Study
Mehrnaz Sabet (Cornell University, Ithaca, New York, United States), Mania Orand (University of Washington, Seattle, Washington, United States), David W. McDonald (University of Washington, Seattle, Washington, United States)
Drones are increasingly used to support humanitarian crises and events that involve dangerous or costly tasks. While drones have great potential for remote collaborative work and aerial telepresence, existing drone technology is limited in its support for synchronous collaboration among multiple remote users. Through three design iterations and evaluations, we prototyped Squadrone, a novel aerial telepresence platform that supports synchronous mid-air collaboration among multiple remote users. We present our design and report results from evaluating our iterations with 13 participants in 3 different collaboration configurations. Our first design iteration validates the basic functionality of the platform. Then, we establish the effectiveness of collaboration using a 360-degree shared aerial display. Finally, we simulate a type of search task in an open environment to see if collaborative telepresence impacts members’ participation. The results validate some initial goals for Squadrone and are used to reflect back on a recent telepresence design framework.
9
STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics
Sebastian Hubenschmid (University of Konstanz, Konstanz, Germany), Johannes Zagermann (University of Konstanz, Konstanz, Germany), Simon Butscher (University of Konstanz, Konstanz, Germany), Harald Reiterer (University of Konstanz, Konstanz, Germany)
Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures supported by default to interact with visualizations in augmented reality (e.g. due to limited precision). Touch-based interaction (e.g. via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for the multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality and research implications indicating areas that need further investigation.
9
Sketchnote Components, Design Space Dimensions, and Strategies for Effective Visual Note Taking
Rebecca Zheng (University College London, London, United Kingdom), Marina Fernández Camporro (University College London, London, United Kingdom), Hugo Romat (ETH, Zurich, Switzerland), Nathalie Henry Riche (Microsoft Research, Redmond, Washington, United States), Benjamin Bach (Edinburgh University, Edinburgh, United Kingdom), Fanny Chevalier (University of Toronto, Toronto, Ontario, Canada), Ken Hinckley (Microsoft Research, Redmond, Washington, United States), Nicolai Marquardt (University College London, London, United Kingdom)
Sketchnoting is a form of visual note taking where people listen to, synthesize, and visualize ideas from a talk or other event using a combination of pictures, diagrams, and text. Little is known about the design space of this kind of visual note taking. With an eye towards informing the implementation of digital equivalents of sketchnoting, inking, and note taking, we introduce a classification of sketchnote styles and techniques, with a qualitative analysis of 103 sketchnotes, and situated in context with six semi-structured follow up interviews. Our findings distill core sketchnote components (content, layout, structuring elements, and visual styling) and dimensions of the sketchnote design space, classifying levels of conciseness, illustration, structure, personification, cohesion, and craftsmanship. We unpack strategies to address particular note taking challenges, for example dealing with constraints of live drawings, and discuss relevance for future digital inking tools, such as recomposition, styling, and design suggestions.
9
Towards “Avatar-Friendly” 3D Manipulation Techniques: Bridging the Gap Between Sense of Embodiment and Interaction in Virtual Reality
Diane Dewez (Inria, Rennes, France), Ludovic Hoyet (Inria, Rennes, France), Anatole Lécuyer (Inria, Rennes, France), Ferran Argelaguet Sanz (Inria, Rennes, France)
Avatars, the users' virtual representations, are becoming ubiquitous in virtual reality applications. In this context, the avatar becomes the medium which enables users to manipulate objects in the virtual environment. It also becomes the users' main spatial reference, which can not only alter their interaction with the virtual environment, but also the perception of themselves. In this paper, we review and analyse the current state-of-the-art for 3D object manipulation and the sense of embodiment. Our analysis is twofold. First, we discuss the impact that the avatar can have on object manipulation. Second, we discuss how the different components of a manipulation technique (i.e. input, control and feedback) can influence the user’s sense of embodiment. Throughout the analysis, we crystallise our discussion with practical guidelines for VR application designers and we propose several research topics towards “avatar-friendly” manipulation techniques.
8
The Role of Social Presence for Cooperation in Augmented Reality on Head Mounted Devices
Niklas Osmers (Clausthal University of Technology, Clausthal-Zellerfeld, Germany), Michael Prilla (Clausthal University of Technology, Clausthal-Zellerfeld, Germany), Oliver Blunk (Clausthal University of Technology, Clausthal-Zellerfeld, Germany), Gordon George Brown (Clausthal University of Technology, Clausthal-Zellerfeld, Germany), Marc Janßen (Clausthal University of Technology, Clausthal-Zellerfeld, Germany), Nicolas Kahrl (Clausthal University of Technology, Clausthal-Zellerfeld, Germany)
With growing interest regarding cooperation support using Augmented Reality (AR), social presence has become a popular measure of its quality. While this concept is established throughout cooperation research, its role in AR is still unclear: Some work uses social presence as an indicator for support quality, while others found no impact at all. To clarify this role, we conducted a literature review of recent publications that empirically investigated social presence in cooperative AR. After a thorough selection procedure, we analyzed 19 publications according to factors influencing social presence and the impact of social presence on cooperation support. We found that certain interventions support social presence better than others, that social presence has an influence on user’s preferences and that the relation between social presence and cooperation quality may depend on the symmetry of the cooperation task. This contributes to existing research by clarifying the role of social presence for cooperative AR and deriving corresponding design recommendations.
8
GuideCopter - A Precise Drone-Based Haptic Guidance Interface for Blind or Visually Impaired People
Felix Huppert (University of Passau, Passau, Bavaria, Germany), Gerold Hoelzl (University of Passau, Passau, Bavaria, Germany), Matthias Kranz (University of Passau, Passau, Bavaria, Germany)
Drone-assisted navigation aids for supporting the walking activities of visually impaired people are established in related work, but fine-point object grasping and object localization in unknown environments still present an open and complex challenge. We present a drone-based interface that provides fine-grain haptic feedback and thus physically guides users in hand-object localization tasks in unknown surroundings. Our research is built around community groups of blind or visually impaired (BVI) people, who provided in-depth insights during the development process and later served as study participants. A pilot study assessed users' sensitivity to the applied guiding forces and to different human-drone tether interfacing possibilities. In a comparative follow-up study, we show that our drone-based approach achieves greater accuracy than a current audio-based hand guiding system and delivers an overall more intuitive and relatable fine-point guiding experience.
8
Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics
Martin Schmitz (Technical University of Darmstadt, Darmstadt, Germany), Jan Riemann (Technical University of Darmstadt, Darmstadt, Germany), Florian Müller (TU Darmstadt, Darmstadt, Germany), Steffen Kreis (TU Darmstadt, Darmstadt, Germany), Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3Dprinters, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability by a series of example applications.
8
Bad Breakdowns, Useful Seams, and Face Slapping: Analysis of VR Fails on YouTube
Emily Dao (Monash University, Melbourne, Victoria, Australia), Andreea Muresan (University of Copenhagen, Copenhagen, Denmark), Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark), Jarrod Knibbe (University of Melbourne, Melbourne, Australia)
Virtual reality (VR) is increasingly used in complex social and physical settings outside of the lab. However, not much is known about how these settings influence use, nor how to design for them. We analyse 233 YouTube videos of VR Fails to: (1) understand when breakdowns occur, and (2) reveal how the seams between VR use and the social and physical setting emerge. The videos show a variety of fails, including users flailing, colliding with surroundings, and hitting spectators. They also suggest causes of the fails, including fear, sensorimotor mismatches, and spectator participation. We use the videos as inspiration to generate design ideas. For example, we discuss more flexible boundaries between the real and virtual world, ways of involving spectators, and interaction designs to help overcome fear. Based on the findings, we further discuss the ‘moment of breakdown’ as an opportunity for designing engaging and enhanced VR experiences.
8
Teardrop Glasses: Pseudo Tears Induce Sadness in You and Those Around You
Shigeo Yoshida (The University of Tokyo, Tokyo, Japan), Takuji Narumi (The University of Tokyo, Tokyo, Japan), Tomohiro Tanikawa (The University of Tokyo, Tokyo, Japan), Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan), Michitaka Hirose (The University of Tokyo, Tokyo, Japan)
Emotional contagion is a phenomenon in which one's emotions are transmitted among individuals unconsciously by observing others' emotional expressions. In this paper, we propose a method for mediating people's emotions by triggering emotional contagion through artificial bodily changes such as pseudo tears. We focused on shedding tears because of the link to several emotions besides sadness. In addition, it is expected that shedding tears would induce emotional contagion because it is observable by others. We designed an eyeglasses-style wearable device, Teardrop glasses, that release water drops near the wearer's eyes. The drops flow down the cheeks and emulate real tears. The study revealed that artificial crying with pseudo tears increased sadness among both wearers and those observing them. Moreover, artificial crying attenuated happiness and positive feelings in observers. Our findings show that actual bodily changes are not necessary for inducing emotional contagion as artificial bodily changes are also sufficient.
8
Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion and Inverse Kinematics
Karan Ahuja (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Sven Mayer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Mayank Goel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
We present Pose-on-the-Go, a full-body pose estimation system that uses sensors already found in today’s smartphones. This stands in contrast to prior systems, which require worn or external sensors. We achieve this result via extensive sensor fusion, leveraging a phone’s front and rear cameras, the user-facing depth camera, touchscreen, and IMU. Even still, we are missing data about a user’s body (e.g., angle of the elbow joint), and so we use inverse kinematics to estimate and animate probable body poses. We provide a detailed evaluation of our system, benchmarking it against a professional-grade Vicon tracking system. We conclude with a series of demonstration applications that underscore the unique potential of our approach, which could be enabled on many modern smartphones with a simple software update.
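To illustrate the inverse-kinematics step: when only an end effector is sensed (here, roughly, the wrist of the hand holding the phone), intermediate joints can be recovered analytically. A minimal two-bone sketch via the law of cosines, with assumed average segment lengths; the system's full-body IK is of course far richer:

```python
import math

def elbow_angle(shoulder, wrist, upper_arm=0.30, forearm=0.28):
    """Interior elbow angle (degrees) from shoulder and wrist positions
    alone: 180 = straight arm, smaller = more flexed. Segment lengths
    are assumed averages, not per-user measurements."""
    d = min(math.dist(shoulder, wrist), upper_arm + forearm - 1e-6)
    cos_elbow = (upper_arm**2 + forearm**2 - d**2) / (2 * upper_arm * forearm)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_elbow))))

# Phone held near the shoulder implies a strongly flexed elbow (~67 deg).
print(elbow_angle((0.0, 1.4, 0.0), (0.15, 1.2, 0.2)))
```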
8
IdeaBot: Investigating Social Facilitation in Human-Machine Team Creativity
Angel Hsing-Chi Hwang (Cornell University, Ithaca, New York, United States), Andrea Stevenson Won (Cornell University, Ithaca, New York, United States)
The present study investigates how human subjects collaborate with a computer-mediated chatbot in creative idea generation tasks. In three text-based between-group studies, we tested whether the perceived identity (i.e., whether the bot is perceived as a machine or as a human) or the conversational style of a teammate would moderate the outcomes of participants’ creative production. In Study 1, participants worked with either a chatbot or a human confederate. In Study 2, all participants worked with a human teammate but were informed that their partner was either a human or a chatbot. Conversely, all participants worked with a chatbot in Study 3, but were told the identity of their partner was either a chatbot or a human. We investigated differences in idea generation outcomes and found that participants consistently contributed more ideas, and ideas of higher quality, when they perceived their teamworking partner as a bot. Furthermore, when the conversational style of the partner was robotic, participants with high anxiety in group communication reported greater creative self-efficacy in task performance. Finally, whether the perceived dominance of a partner and the pressure to come up with ideas during the task mediated positive outcomes of idea generation also depended on whether the conversational style of the bot partner was robot- or human-like. Based on our findings, we discuss implications for the future design of artificial agents as active team players in collaboration tasks.
8
Visuo-haptic Illusions for Linear Translation and Stretching using Physical Proxies in Virtual Reality
Martin Feick (Saarland Informatics Campus, Saarbrücken, Germany), Niko Kleer (Saarland Informatics Campus, Saarbrücken, Germany), André Zenner (Saarland University, Saarland Informatics Campus, Saarbrücken, Germany), Anthony Tang (University of Toronto, Toronto, Ontario, Canada), Antonio Krüger (DFKI, Saarland Informatics Campus, Saarbrücken, Germany)
Providing haptic feedback when manipulating virtual objects is an essential part of immersive virtual reality experiences; however, it is challenging to replicate all of an object’s properties and characteristics. We propose the use of visuo-haptic illusions alongside physical proxies to enhance the scope of proxy-based interactions with virtual objects. In this work, we focus on two manipulation techniques, linear translation and stretching across different distances, and investigate how much discrepancy between the physical proxy and the virtual object may be introduced without participants noticing. In a study with 24 participants, we found that manipulation technique and travel distance significantly affect the detection thresholds, and that visuo-haptic illusions impact performance and accuracy. We show that this technique can be used to enable functional proxy objects that act as stand-ins for multiple virtual objects, illustrating the technique through a showcase VR-DJ application.
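The linear-translation illusion reduces to a remapping of the rendered hand: as the real hand travels toward the physical proxy, the virtual hand travels toward the offset virtual object, and both arrive together. A one-dimensional sketch; the study's detection thresholds determine how much of this discrepancy goes unnoticed:

```python
def remapped_hand_x(real_x, real_start_x, proxy_x, virtual_x):
    """Map real hand progress toward the proxy onto rendered progress
    toward the virtual object (discrepancy = virtual_x - proxy_x)."""
    total = proxy_x - real_start_x
    progress = (real_x - real_start_x) / total if total else 1.0
    return real_start_x + progress * (virtual_x - real_start_x)

# Real reach of 40 cm, virtual object rendered 5 cm beyond the proxy:
print(remapped_hand_x(0.20, 0.0, 0.40, 0.45))  # rendered hand at 0.225 m
```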
8
Large Scale Analysis of Multitasking Behavior During Remote Meetings
Hancheng Cao (Stanford University, Stanford, California, United States), Chia-Jung Lee (Amazon, Seattle, Washington, United States), Shamsi Iqbal (Microsoft Research, Redmond, Washington, United States), Mary Czerwinski (Microsoft Research, Redmond, Washington, United States), Priscilla N. Y. Wong (UCL Interaction Centre, London, United Kingdom), Sean Rintel (Microsoft Research, Cambridge, United Kingdom), Brent Hecht (Microsoft, Redmond, Washington, United States), Jaime Teevan (Microsoft, Redmond, Washington, United States), Longqi Yang (Microsoft, Redmond, Washington, United States)
Virtual meetings are critical for remote work because of the need for synchronous collaboration in the absence of in-person interactions. In-meeting multitasking is closely linked to people's productivity and wellbeing. However, we currently have limited understanding of multitasking in remote meetings and its potential impact. In this paper, we present what we believe is the most comprehensive study of remote meeting multitasking behavior through an analysis of a large-scale telemetry dataset collected from February to May 2020 of U.S. Microsoft employees and a 715-person diary study. Our results demonstrate that intrinsic meeting characteristics such as size, length, time, and type, significantly correlate with the extent to which people multitask, and multitasking can lead to both positive and negative outcomes. Our findings suggest important best-practice guidelines for remote meetings (e.g., avoid important meetings in the morning) and design implications for productivity tools (e.g., support positive remote multitasking).
8
Increasing Electrical Muscle Stimulation’s Dexterity by means of Back of the Hand Actuation
Akifumi Takahashi (University of Chicago, Chicago, Illinois, United States), Jas Brooks (University of Chicago, Chicago, Illinois, United States), Hiroyuki Kajimoto (The University of Electro-Communications, Chofu, Tokyo, Japan), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a technique that allows an unprecedented level of dexterity in electrical muscle stimulation (EMS), i.e., it allows interactive EMS-based devices to flex the user’s fingers independently of each other. EMS is a promising technique for force feedback because of its small form factor when compared to mechanical actuators. However, the current EMS approach to flexing the user’s fingers (i.e., attaching electrodes to the base of the forearm, where finger muscles anchor) is limited by its inability to flex a target finger’s metacarpophalangeal (MCP) joint independently of the other fingers. In other words, current EMS devices cannot flex one finger alone, they always induce unwanted actuation to adjacent fingers. To tackle the lack of dexterity, we propose and validate a new electrode layout that places the electrodes on the back of the hand, where they stimulate the interossei/lumbricals muscles in the palm, which have never received attention with regards to EMS. In our user study, we found that our technique offers four key benefits when compared to existing EMS electrode layouts: our technique (1) flexes all four fingers around the MCP joint more independently; (2) has less unwanted flexion of other joints (such as the proximal interphalangeal joint); (3) is more robust to wrist rotations; and (4) reduces calibration time. Therefore, our EMS technique enables applications for interactive EMS systems that require a level of flexion dexterity not available until now. We demonstrate the improved dexterity with four example applications: three musical instrumental tutorials (piano, drum, and guitar) and a VR application that renders force feedback in individual fingers while manipulating a yo-yo.
7
Tea, Earl Grey, Hot: Designing Speech Interactions from the Imagined Ideal of Star Trek
Benett Axtell (University of Toronto, Toronto, Ontario, Canada), Cosmin Munteanu (University of Toronto Mississauga, Mississauga, Ontario, Canada)
Speech is now common in daily interactions with our devices, thanks to voice user interfaces (VUIs) like Alexa. Despite their seeming ubiquity, designs often do not match users’ expectations. Science fiction, which is known to influence design of new technologies, has included VUIs for decades. Star Trek: The Next Generation is a prime example of how people envisioned ideal VUIs. Understanding how current VUIs live up to Star Trek’s utopian technologies reveals mismatches between current designs and user expectations, as informed by popular fiction. Combining conversational analysis and VUI user analysis, we study voice interactions with the Enterprise’s computer and compare them to current interactions. Independent of futuristic computing power, we find key design-based differences: Star Trek interactions are brief and functional, not conversational, they are highly multimodal and context-driven, and there is often no spoken computer response. From this, we suggest paths to better align VUIs with user expectations.
7
TexYZ: Embroidering Enameled Wires for Three Degree-of-Freedom Mutual Capacitive Sensing
Roland Aigner (University of Applied Sciences Upper Austria, Hagenberg, Austria), Andreas Pointner (University of Applied Sciences Upper Austria, Hagenberg, Austria), Thomas Preindl (University of Applied Sciences Upper Austria, Hagenberg, Austria), Rainer Danner (University of Applied Sciences Upper Austria, Hagenberg, Austria), Michael Haller (University of Applied Sciences Upper Austria, Hagenberg, Austria)
In this paper, we present TexYZ, a method for rapid and effortless manufacturing of textile mutual capacitive sensors using a commodity embroidery machine. We use enameled wire as a bobbin thread to yield textile capacitors with high quality and consistency. As a consequence, we are able to leverage the precision and expressiveness of projected mutual capacitance for textile electronics, even when size is limited. Harnessing the assets of machine embroidery, we implement and analyze five distinct electrode patterns, examine the resulting electrical features with respect to geometrical attributes, and demonstrate the feasibility of two promising candidates for small-scale matrix layouts. The resulting sensor patches are further evaluated in terms of capacitance homogeneity, signal-to-noise ratio, sensing range, and washability. Finally, we demonstrate two use case scenarios, primarily focusing on continuous input with up to three degrees-of-freedom.
7
“Grip-that-there”: An Investigation of Explicit and Implicit Task Allocation Techniques for Human-Robot Collaboration
Karthik Mahadevan (University of Toronto, Toronto, Ontario, Canada), Mauricio Sousa (University of Toronto, Toronto, Ontario, Canada), Anthony Tang (University of Toronto, Toronto, Ontario, Canada), Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
In ad-hoc human-robot collaboration (HRC), humans and robots work on a task without pre-planning the robot's actions prior to execution; instead, task allocation occurs in real-time. However, prior research has largely focused on task allocations that are pre-planned - there has not been a comprehensive exploration or evaluation of techniques where task allocation is adjusted in real-time. Inspired by HCI research on territoriality and proxemics, we propose a design space of novel task allocation techniques including both explicit techniques, where the user maintains agency, and implicit techniques, where the efficiency of automation can be leveraged. The techniques were implemented and evaluated using a tabletop HRC simulation in VR. A 16-participant study, which presented variations of a collaborative block stacking task, showed that implicit techniques enable efficient task completion and task parallelization, and should be augmented with explicit mechanisms to provide users with fine-grained control.
7
Phonetroller: Visual Representations of Fingers for Precise Touch Input when using a Phone in VR
Fabrice Matulic (Preferred Networks Inc., Tokyo, Japan), Aditya Ganeshan (Preferred Networks Inc., Tokyo, Japan), Hiroshi Fujiwara (Preferred Networks Inc., Tokyo, Japan), Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
Smartphone touch screens are potentially attractive for interaction in virtual reality (VR). However, the user cannot see the phone or their hands in a fully immersive VR setting, impeding their ability for precise touch input. We propose mounting a mirror above the phone screen such that the front-facing camera captures the thumbs on or near the screen. This enables the creation of semi-transparent overlays of thumb shadows and inference of fingertip hover points with deep learning, which help the user aim for targets on the phone. A study compares the effect of visual feedback on touch precision in a controlled task and qualitatively evaluates three example applications demonstrating the potential of the technique. The results show that the enabled style of feedback is effective for thumb-size targets, and that the VR experience can be enriched by using smartphones as VR controllers supporting precise touch input.
7
Data-Driven Mark Orientation for Trend Estimation in Scatterplots
Tingting Liu (School of Computer Science, Qingdao, Shandong, China), Xiaotong Li (School of Computer Science, Qingdao, Shandong, China), Chen Bao (Shandong University, Qingdao, Shandong, China), Michael Correll (Tableau Software, Seattle, Washington, United States), Changhe Tu (Shandong University, Qingdao, China), Oliver Deussen (University of Konstanz, Konstanz, Germany), Yunhai Wang (Shandong University, Qingdao, China)
A common task for scatterplots is communicating trends in bivariate data. However, the ability of people to visually estimate these trends is under-explored, especially when the data violate assumptions required for common statistical models, or visual trend estimates are in conflict with statistical ones. In such cases, designers may need to intervene and de-bias these estimations, or otherwise inform viewers about differences between statistical and visual trend estimations. We propose data-driven mark orientation as a solution in such cases, where the directionality of marks in the scatterplot guide participants when visual estimation is otherwise unclear or ambiguous. Through a set of laboratory studies, we investigate trend estimation across a variety of data distributions and mark directionalities, and find that data-driven mark orientation can help resolve ambiguities in visual trend estimates.
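One simple data-driven orientation scheme, sketched here for illustration (not necessarily the authors' estimator), fits a local least-squares line around each point and rotates its mark to the resulting slope:

```python
import numpy as np

def mark_angles(x, y, k=20):
    """Per-mark rotation (degrees): slope of a least-squares line fit to
    each point's k nearest neighbours along x."""
    order = np.argsort(x)
    xs, ys = np.asarray(x, float)[order], np.asarray(y, float)[order]
    angles = np.empty(len(xs))
    for i in range(len(xs)):
        lo, hi = max(0, i - k // 2), min(len(xs), i + k // 2 + 1)
        slope, _intercept = np.polyfit(xs[lo:hi], ys[lo:hi], 1)
        angles[i] = np.degrees(np.arctan(slope))
    return angles[np.argsort(order)]  # restore the input order
```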
7
Interaction Illustration Taxonomy: Classification of Styles and Techniques for Visually Representing Interaction Scenarios
Axel Antoine (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France), Sylvain Malacria (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France), Nicolai Marquardt (University College London, London, United Kingdom), Géry Casiez (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France)
Static illustrations are a ubiquitous means to represent interaction scenarios. Across papers and reports, these visuals demonstrate people's use of devices, explain systems, or show design spaces. Creating such figures is challenging, and very little is known about the overarching strategies for visually representing interaction scenarios. To support this task, we contribute a unified taxonomy of design elements that compose such figures. In particular, we provide a detailed classification of Structural and Interaction strategies, such as composition, visual techniques, dynamics, representation of users, and many others -- all in context of the type of scenarios. This taxonomy can inform researchers' choices when creating new figures, by providing a concise synthesis of visual strategies, and revealing approaches they were not aware of before. Furthermore, to support the community in creating further taxonomies, we also provide three open-source software tools facilitating the coding process and visual exploration of the coding scheme.
7
DistanciAR: Authoring Site-Specific Augmented Reality Experiences for Remote Environments
Zeyu Wang (Yale University, New Haven, Connecticut, United States), Cuong Nguyen (Adobe Research, San Francisco, California, United States), Paul Asente (Adobe, San Jose, California, United States), Julie Dorsey (Yale University, New Haven, Connecticut, United States)
Most augmented reality (AR) authoring tools only support the author's current environment, but designers often need to create site-specific experiences for a different environment. We propose DistanciAR, a novel tablet-based workflow for remote AR authoring. Our baseline solution involves three steps. A remote environment is captured by a camera with LiDAR; then, the author creates an AR experience from a different location using AR interactions; finally, a remote viewer consumes the AR content on site. A formative study revealed understanding and navigating the remote space as key challenges with this solution. We improved the authoring interface by adding two novel modes: Dollhouse, which renders a bird's-eye view, and Peek, which creates photorealistic composite images using captured images. A second study compared this improved system with the baseline, and participants reported that the new modes made it easier to understand and navigate the remote scene.
7
How to Evaluate Object Selection and Manipulation in VR? Guidelines from 20 Years of Studies
Joanna Bergström (University of Copenhagen, Copenhagen, Denmark), Tor-Salve Dalsgaard (University of Copenhagen, Copenhagen, Denmark), Jason Alexander (University of Bath, Bath, United Kingdom), Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)
The VR community has introduced many object selection and manipulation techniques during the past two decades. Typically, they are empirically studied to establish their benefits over the state-of-the-art. However, the literature contains few guidelines on how to conduct such studies; standards developed for evaluating 2D interaction often do not apply. This lack of guidelines makes it hard to compare techniques across studies, to report evaluations consistently, and therefore to accumulate or replicate findings. To build such guidelines, we review 20 years of studies on VR object selection and manipulation. Based on the review, we propose recommendations for designing studies and a checklist for reporting them. We also identify research directions for improving evaluation methods and offer ideas for how to make studies more ecologically valid and rigorous.
7
Vinci: An Intelligent Graphic Design System for Generating Advertising Posters
Shunan Guo (Tongji University, Shanghai, China), Zhuochen Jin (Tongji University, Shanghai, China), Fuling Sun (Tongji University, Shanghai, China), Jingwen Li (Intelligent Big Data Visualization Lab, Tongji University, Shanghai, China), Zhaorui Li (Tongji University, Shanghai, China), Yang Shi (Tongji College of Design and Innovation, Shanghai, China), Nan Cao (Tongji College of Design and Innovation, Shanghai, China)
Advertising posters are a commonly used form of information presentation to promote a product. Producing advertising posters often takes designers much time and effort, as they are confronted with abundant choices of design elements and layouts. This paper presents Vinci, an intelligent system that supports the automatic generation of advertising posters. Given the user-specified product image and taglines, Vinci uses a deep generative model to match the product image with a set of design elements and layouts for generating an aesthetic poster. The system also integrates online editing-feedback that supports users in editing the posters and updating the generated results with their design preference. Through a series of user studies and a Turing test, we found that Vinci can generate posters as good as those created by human designers, and that the online editing-feedback improves the efficiency of poster modification.
7
Rethinking Eye-blink: Assessing Task Difficulty through Physiological Representation of Spontaneous Blinking
Youngjun Cho (University College London, London, United Kingdom)
Continuous assessment of task difficulty and mental workload is essential to improving the usability and accessibility of interactive systems. Eye tracking data has often been investigated to achieve this ability, with reports that standard blink metrics play only a limited role. Here, we propose a new approach to the analysis of eye-blink responses for automated estimation of task difficulty. The core module is a time-frequency representation of eye-blink, which aims to capture the richness of information reflected in blinking. In our first study, we show that this method significantly improves the sensitivity to task difficulty. We then demonstrate how to form a framework where the represented patterns are analyzed with multi-dimensional Long Short-Term Memory recurrent neural networks for their non-linear mapping onto difficulty-related parameters. This framework outperformed other methods that used hand-engineered features. The approach works with any built-in camera, without requiring specialized devices. We conclude by discussing how Rethinking Eye-blink can benefit real-world applications.
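As a rough illustration of such a pipeline, the sketch below computes a spectrogram of a toy blink signal and feeds its time slices to a standard LSTM that regresses a difficulty score. The paper uses multi-dimensional LSTMs and its own time-frequency representation; every parameter here (sampling rate, window size, network width) is an assumption.

```python
# Hypothetical sketch: time-frequency representation of an eye-blink signal
# fed to an LSTM, loosely following the pipeline the abstract describes.
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn

# Toy blink signal: 1 where a blink occurs, 0 otherwise (30 Hz camera).
fs = 30
blink_signal = np.zeros(fs * 60)   # one minute of data
blink_signal[::90] = 1.0           # a blink every 3 seconds (toy data)

# Time-frequency representation (the paper's exact transform may differ).
freqs, times, Sxx = spectrogram(blink_signal, fs=fs, nperseg=64, noverlap=32)

# Treat each time slice of the spectrogram as one step of an LSTM sequence.
x = torch.tensor(Sxx.T, dtype=torch.float32).unsqueeze(0)  # (1, T, F)

class DifficultyEstimator(nn.Module):
    def __init__(self, n_freq_bins: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_freq_bins, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # scalar difficulty estimate

    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

model = DifficultyEstimator(n_freq_bins=x.shape[-1])
print(model(x))  # untrained output; real labels would come from user studies
```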
7
BackTrack: 2D Back-of-device Interaction Through Front Touchscreen
Chang Xiao (Columbia University, New York, New York, United States)Karl Bayer (Snap Inc., New York, New York, United States)Changxi Zheng (Columbia University, New York, New York, United States)Shree K. Nayar (Snap, New York, New York, United States)
We present BackTrack, a trackpad placed on the back of a smartphone to track fine-grained finger motions. Our system has a small form factor, with all the circuits encapsulated in a thin layer attached to a phone case. It can be used with any off-the-shelf smartphone, requiring no power supply or modification of the operating system. BackTrack simply extends the finger-tracking area of the front screen, without interrupting the use of the front screen. It also provides a switch to prevent unintentional touches on the trackpad. All these features are enabled by a battery-free capacitive circuit, part of which is a transparent, thin-film conductor coated on a thin glass and attached to the front screen. To ensure accurate and robust tracking, the capacitive circuits are carefully designed. Our design is based on a circuit model of capacitive touchscreens, justified through both physics-based finite-element simulation and controlled laboratory experiments. We conduct user studies to evaluate the performance of BackTrack. We also demonstrate its use in a number of smartphone applications.
7
TiltChair: Manipulative Posture Guidance by Actively Inclining the Seat of an Office Chair
Kazuyuki Fujita (Tohoku University, Sendai, Miyagi, Japan)Aoi Suzuki (Research Institute of Electrical Communication, Tohoku University, Sendai, Japan)Kazuki Takashima (Tohoku University, Sendai, Japan)Kaori Ikematsu (Yahoo Japan Corporation, Tokyo, Japan)Yoshifumi Kitamura (Tohoku University, Sendai, Japan)
We propose TiltChair, an actuated office chair that physically manipulates the user's posture by actively inclining the chair's seat to address problems associated with prolonged sitting. The system controls the inclination angle and motion speed with the aim of achieving manipulative but unobtrusive posture guidance. To demonstrate its potential, we first built a prototype of TiltChair with a seat that can be tilted under pneumatic control. We then investigated the effects of the seat's inclination angle and motion on task performance and overall sitting experience through two experiments. The results show that the inclination angle mainly affected the difficulty of maintaining one's posture, while the motion speed affected the conspicuousness and subjective acceptability of the motion. However, these seating conditions did not affect objective task performance. Based on these results, we propose a design space for facilitating effective seat-inclination behavior along the three dimensions of angle, speed, and continuity. Furthermore, we discuss promising applications.
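To make the angle/speed/continuity design space concrete, here is a minimal hypothetical control loop that steps the seat toward a target inclination at a capped speed. The angles, speed, and update interval are invented; a real implementation would command pneumatic valves instead of printing.

```python
# Minimal hypothetical posture-guidance loop: step the seat toward a target
# inclination at a capped speed (slow motion keeps the guidance unobtrusive).
import time

def tilt_to(target_deg, current_deg, speed_deg_per_s=0.5, step_s=0.5):
    """Yield intermediate seat angles until the target inclination is reached."""
    while abs(target_deg - current_deg) > 1e-9:
        max_step = speed_deg_per_s * step_s
        delta = max(-max_step, min(max_step, target_deg - current_deg))
        current_deg += delta
        yield current_deg
        time.sleep(step_s)  # pacing stands in for slow pneumatic actuation

for angle in tilt_to(target_deg=2.0, current_deg=0.0):
    print(f"seat inclination: {angle:.2f} deg")
```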
7
Stereo-Smell via Electrical Trigeminal Stimulation
Jas Brooks (University of Chicago, Chicago, Illinois, United States)Shan-Yuan Teng (University of Chicago, Chicago, Illinois, United States)Jingxuan Wen (University of Chicago, Chicago, Illinois, United States)Romain Nith (University of Chicago, Chicago, Illinois, United States)Jun Nishida (University of Chicago, Chicago, Illinois, United States)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a novel type of olfactory device that creates a stereo-smell experience, i.e., directional information about the location of an odor, by rendering the readings of external odor sensors as trigeminal sensations using electrical stimulation of the user's nasal septum. The key is that the sensations from the trigeminal nerve, which arise from nerve endings in the nose, are perceptually fused with those of the olfactory bulb (the brain region that senses smells). As such, we propose that electrically stimulating the trigeminal nerve is an ideal candidate for stereo-smell augmentation/substitution that, unlike other approaches, does not require implanted electrodes in the olfactory bulb. To realize this, we engineered a self-contained device that users wear across their nasal septum. The device delivers output by stimulating the user's trigeminal nerve with electrical impulses of variable pulse width, and takes input by sensing the user's inhalations with a photoreflector. It measures 10×23 mm and communicates with external gas sensors using Bluetooth. In our first user study, we found the key electrical waveform parameters that enable users to feel an odor's intensity (absolute electric charge) and direction (phase order and net charge). In our second study, we demonstrated that participants were able to localize a virtual smell source in the room using our prototype without any prior training. Using these insights, our device enables expressive trigeminal sensations and could function as an assistive device for people with anosmia, who are unable to smell.
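The abstract's parameter mapping, intensity as absolute electric charge and direction as phase order and net charge, might look something like the following sketch. All constants, the biphasic pulse model, and the left/right encoding are illustrative assumptions, not the authors' calibrated values.

```python
# Hedged sketch of an intensity/direction-to-waveform mapping: intensity
# scales the total (absolute) charge; direction is encoded in which phase
# leads and how asymmetric the two phases are (net charge). All invented.
from dataclasses import dataclass

@dataclass
class BiphasicPulse:
    first_phase_us: float    # pulse width of the leading phase (microseconds)
    second_phase_us: float   # pulse width of the trailing phase
    leading_polarity: int    # +1 = anodic-first, -1 = cathodic-first
    current_ma: float

def encode_odor(intensity: float, direction: float) -> BiphasicPulse:
    """intensity in [0, 1]; direction in [-1 (left), +1 (right)]."""
    current_ma = 0.5                            # fixed, hypothetical amplitude
    total_us = 50 + 450 * intensity             # absolute charge ~ intensity
    asymmetry = abs(direction)                  # net charge ~ |direction|
    first = total_us * (0.5 + 0.4 * asymmetry)
    return BiphasicPulse(first, total_us - first,
                         leading_polarity=1 if direction >= 0 else -1,
                         current_ma=current_ma)

print(encode_odor(intensity=0.8, direction=-0.5))
```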
7
Tele-Immersive Improv: Effects of Immersive Visualisations on Rehearsing and Performing Theatre Online
Boyd Branch (University of Kent, Canterbury, Kent, United Kingdom)Christos Efstratiou (University of Kent, Canterbury, Kent, United Kingdom)Piotr Mirowski (HumanMachine, London, United Kingdom)Kory W. Mathewson (University of Alberta, Edmonton, Alberta, Canada)Paul Allain (University of Kent, Canterbury, United Kingdom)
Performers acutely need but lack tools to remotely rehearse and create live theatre, particularly due to global restrictions on social interaction during the Covid-19 pandemic. However, no studies have yet examined how remote video collaboration affects performance. This paper presents the findings of a six-week field study with 16 domain experts investigating how tele-immersion affects the rehearsal and performance of improvisational theatre. To conduct the study, an original media server was developed for co-locating remote performers in shared virtual 3D environments accessed through popular video-conferencing software. The results of this qualitative study indicate that tele-immersive environments uniquely provide performers with a strong sense of co-presence, feelings of physical connection, and an increased ability to enter the social-flow states required for improvisational theatre. Based on our observations, we put forward design recommendations for video collaboration tools tailored to the unique demands of live performance.
6
JetController: High-speed Ungrounded 3-DoF Force Feedback Controllers using Air Propulsion Jets
Yu-Wei Wang (National Taiwan University, Taipei, Taiwan)Yu-Hsin Lin (National Taiwan University, Taipei, Taiwan)Pin-Sung Ku (National Taiwan University, Taipei, Taiwan)Yōko Miyatake (Ochanomizu University, Tokyo, Japan)Yi-Hsuan Mao (National Taiwan University, Taipei, Taiwan)Po-Yu Chen (National Taiwan University, Taipei, Taiwan)Chun-Miao Tseng (National Taiwan University, Taipei, Taiwan)Mike Y. Chen (National Taiwan University, Taipei, Taiwan)
JetController is a novel haptic technology capable of supporting high-speed and persistent 3-DoF ungrounded force feedback. It uses high-speed pneumatic solenoid valves to modulate compressed air, achieving full impulses at 20-50 Hz with forces of 4.0-1.0 N, and combines multiple air propulsion jets to generate 3-DoF force feedback. Compared to propeller-based approaches, JetController supports 10-30 times faster impulse frequencies, and its handheld device is significantly lighter and more compact. JetController supports a wide range of haptic events in games and VR experiences, from firing automatic weapons in games like Halo (15 Hz) to slicing fruits in Fruit Ninja (up to 45 Hz). To evaluate JetController, we integrated our prototype with two popular VR games, Half-Life: Alyx and Beat Saber, to support a variety of 3D interactions. Study results showed that JetController significantly improved realism, enjoyment, and overall experience compared to commercial vibrating controllers, and was preferred by most participants.
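One way to combine multiple jets into a 3-DoF force, not necessarily the authors' controller, is to solve a non-negative least-squares problem that distributes a desired force vector across fixed nozzle directions, since each jet can only push. The nozzle layout below is hypothetical; only the 4.0 N peak force comes from the abstract.

```python
# Illustrative sketch: distribute a desired 3-DoF force across air jets
# with fixed nozzle directions via non-negative least squares (jets only push).
import numpy as np
from scipy.optimize import nnls

# Hypothetical nozzle directions (unit vectors) on the handheld device.
jet_dirs = np.array([
    [ 1,  0,  0], [-1,  0,  0],
    [ 0,  1,  0], [ 0, -1,  0],
    [ 0,  0,  1], [ 0,  0, -1],
], dtype=float).T                  # shape (3, n_jets)

MAX_THRUST_N = 4.0                 # per-impulse peak force from the abstract

def jet_commands(desired_force: np.ndarray) -> np.ndarray:
    """Return per-jet thrust (N) whose combined push approximates desired_force."""
    thrusts, _ = nnls(jet_dirs, desired_force)
    return np.clip(thrusts, 0.0, MAX_THRUST_N)

# Example: a recoil force pulling back and up, which the valves would then
# pulse at the haptic event's frequency (e.g., 15 Hz for automatic gunfire).
print(jet_commands(np.array([-2.0, 0.0, 1.0])))
```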
6
More Kawaii than a Real-Person Streamer: Understanding How the Otaku Community Engages with and Perceives Virtual YouTubers
Zhicong Lu (City University of Hong Kong, Hong Kong, China)Chenxinran Shen (University of Toronto, Toronto, Ontario, Canada)Jiannan Li (University of Toronto, Toronto, Ontario, Canada)Hong Shen (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Daniel Wigdor (University of Toronto, Toronto, Ontario, Canada)
Live streaming has become increasingly popular, with most streamers presenting their real-life appearance. However, Virtual YouTubers (VTubers), virtual 2D or 3D avatars that are voiced by humans, are emerging as live streamers and attracting a growing viewership in East Asia. Although prior research has found that many viewers seek real-life interpersonal interactions with real-person streamers, it is currently unknown what makes VTuber live streams engaging or how they are perceived differently from real-person streamers. We conducted an interview study to understand how viewers engage with VTubers and perceive the identities of the voice actors behind the avatars (i.e., Nakanohito). The data revealed that virtual avatars bring unique performative opportunities, which result in different viewer expectations and interpretations of VTuber behavior. Viewers intentionally upheld the disembodiment of VTuber avatars from their voice actors. We uncover the nuances in viewer perceptions and attitudes and further discuss the implications of VTuber practices for the understanding of live streaming in general.
6
Understanding User Identification in Virtual Reality through Behavioral Biometrics and the Effect of Body Normalization
Jonathan Liebers (University of Duisburg-Essen, Essen, Germany)Uwe Gruenefeld (University of Duisburg-Essen, Essen, Germany)Lukas Mecke (Bundeswehr University Munich, Munich, Germany)Alia Saad (University of Duisburg-Essen, Essen, Germany)Jonas Auda (University of Duisburg-Essen, Essen, North Rhine-Westphalia, Germany)Florian Alt (Bundeswehr University Munich, Munich, Germany)Stefan Schneegass (University of Duisburg-Essen, Essen, Germany)Mark Abdelaziz (German University in Cairo, Cairo, Egypt)
Virtual Reality (VR) is becoming increasingly popular both in the entertainment and professional domains. Behavioral biometrics have recently been investigated as a means to continuously and implicitly identify users in VR. Applications in VR can specifically benefit from this, for example, to adapt virtual environments and user interfaces as well as to authenticate users. In this work, we conduct a lab study (N=16) to explore how accurately users can be identified during two task-driven scenarios based on their spatial movement. We show that an identification accuracy of up to 90% is possible across sessions recorded on different days. Moreover, we investigate the role of users' physiology in behavioral biometrics by virtually altering and normalizing their body proportions. We find that body normalization in general increases the identification rate, in some cases by up to 38%; hence, it improves the performance of identification systems.
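Body normalization could, for example, rescale tracked positions by a user's body proportions so that the downstream identification model sees size-invariant trajectories. The scaling scheme below is a guess at the general idea, not the paper's procedure.

```python
# Minimal sketch of body normalization: rescale tracked motion by a user's
# body proportions so trajectories become comparable across body sizes.
# The axis-wise scaling scheme here is an assumption for illustration.
import numpy as np

def normalize_trajectory(points: np.ndarray, height_m: float,
                         arm_span_m: float) -> np.ndarray:
    """points: (T, 3) head or controller positions in meters."""
    scaled = points.copy()
    scaled[:, 2] /= height_m      # vertical axis scaled by body height
    scaled[:, :2] /= arm_span_m   # horizontal reach scaled by arm span
    return scaled

rng = np.random.default_rng(0)
raw = rng.normal([0.0, 0.3, 1.7], 0.1, size=(100, 3))   # toy head trace
print(normalize_trajectory(raw, height_m=1.75, arm_span_m=1.80).mean(axis=0))
```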
6
GamesBond: Bimanual Haptic Illusion of Physically Connected Objects for Immersive VR Using Grip Deformation
Neung Ryu (KAIST, Daejeon, Korea, Republic of)Hye-Young Jo (KAIST, Daejeon, Korea, Republic of)Michel Pahud (Microsoft Research, Redmond, Washington, United States)Mike Sinclair (Microsoft, Redmond, Washington, United States)Andrea Bianchi (KAIST, Daejeon, Korea, Republic of)
Virtual Reality experiences, such as games and simulations, typically support the use of bimanual controllers to interact with virtual objects. To recreate the haptic sensation of holding objects of various shapes and behaviors with both hands, previous researchers have used mechanical linkages between the controllers that render adjustable stiffness. However, such a linkage cannot quickly adapt to simulate dynamic objects, nor can it be removed to support free movements. This paper introduces GamesBond, a pair of 4-DoF controllers without a physical linkage but capable of creating the illusion of being connected as a single device, forming a virtual bond. The two controllers work together by dynamically displaying and physically rendering deformations of the hand grips, allowing users to perceive a single connected object between the hands, such as a jumping rope. With a user study and various applications, we show that GamesBond increases the realism, immersion, and enjoyment of bimanual interaction.
6
Data@Hand: Fostering Visual Exploration of Personal Data on Smartphones Leveraging Speech and Touch Interaction
Young-Ho Kim (University of Maryland, College Park, Maryland, United States)Bongshin Lee (Microsoft Research, Redmond, Washington, United States)Arjun Srinivasan (Tableau Research, Seattle, Washington, United States)Eun Kyoung Choe (University of Maryland, College Park, Maryland, United States)
Most mobile health apps employ data visualization to help people view their health and activity data, but these apps provide limited support for visual data exploration. Furthermore, despite its huge potential, mobile visualization research in the personal data context is sparse. This work aims to empower people to easily navigate and compare their personal health data on smartphones by enabling flexible time manipulation with speech. We designed and developed Data@Hand, a mobile app that leverages the synergy of two complementary modalities: speech and touch. Through an exploratory study with 13 long-term Fitbit users, we examined how multimodal interaction helps participants explore their own health data. Participants successfully adopted multimodal interaction (i.e., speech and touch) for convenient and fluid data exploration. Based on the quantitative and qualitative findings, we discuss design implications and opportunities with multimodal interaction for better supporting visual data exploration on mobile devices.
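The speech half of such an interface ultimately maps a recognized utterance to the date range a chart should display. The toy parser below illustrates that mapping for a few phrase patterns; Data@Hand's actual grammar and recognizer are far richer, and the phrases handled here are arbitrary examples.

```python
# Hypothetical sketch of speech-driven time manipulation: map a recognized
# utterance to the date range a health chart should display.
from datetime import date, timedelta

def parse_time_utterance(utterance: str, today: date) -> tuple[date, date]:
    u = utterance.lower()
    if "last week" in u:
        start = today - timedelta(days=today.weekday() + 7)
        return start, start + timedelta(days=6)
    if "this month" in u:
        return today.replace(day=1), today
    if "past" in u and "days" in u:
        n = int(next(tok for tok in u.split() if tok.isdigit()))
        return today - timedelta(days=n), today
    raise ValueError(f"unrecognized time expression: {utterance!r}")

print(parse_time_utterance("show me the past 30 days", date(2021, 5, 10)))
```

Touch would then refine the result, e.g., dragging on the chart to pan or pinch-zoom within the spoken range.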
6
TapNet: The Design, Training, Implementation, and Applications of a Multi-Task Learning CNN for Off-Screen Mobile Input
Michael Xuelin Huang (Google, Mountain View, California, United States)Yang Li (Google Research, Mountain View, California, United States)Nazneen Nazneen (Google, Mountain View, California, United States)Alexander Chao (Google, Mountain View, California, United States)Shumin Zhai (Google, Mountain View, California, United States)
To make off-screen interaction without specialized hardware practical, we investigate using deep learning methods to process data from the common built-in IMU sensors (accelerometer and gyroscope) on mobile phones into a useful set of one-handed interaction events. We present the design, training, implementation, and applications of TapNet, a multi-task network that detects tapping on the smartphone. With the phone's form factor as auxiliary information, TapNet can jointly learn from data across devices and simultaneously recognize multiple tap properties, including tap direction and tap location. We developed two datasets consisting of over 135K training samples, 38K testing samples, and 32 participants in total. Experimental evaluation demonstrated the effectiveness of the TapNet design and its significant improvement over the state of the art. Along with the datasets, codebase, and extensive experiments, TapNet establishes a new technical foundation for off-screen mobile input.
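A multi-task network of this kind could share a convolutional backbone over an IMU window and branch into separate heads, with the phone's form factor appended as auxiliary input. The sketch below follows that shape only; layer sizes, the number of tap classes, and the form-factor encoding are all assumptions rather than TapNet's published architecture.

```python
# Sketch of a multi-task CNN in the spirit of TapNet (details are guesses):
# a shared 1D-conv backbone over an IMU window, separate heads for tap
# direction and tap location, and the phone form factor as auxiliary input.
import torch
import torch.nn as nn

class TapNetSketch(nn.Module):
    def __init__(self, n_directions=4, n_locations=6, form_factor_dim=2):
        super().__init__()
        self.backbone = nn.Sequential(        # input: (B, 6, T) accel+gyro
            nn.Conv1d(6, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.direction_head = nn.Linear(64 + form_factor_dim, n_directions)
        self.location_head = nn.Linear(64 + form_factor_dim, n_locations)

    def forward(self, imu, form_factor):
        z = torch.cat([self.backbone(imu), form_factor], dim=1)
        return self.direction_head(z), self.location_head(z)

model = TapNetSketch()
imu = torch.randn(8, 6, 100)    # batch of 100-sample IMU windows
ff = torch.randn(8, 2)          # e.g., normalized device width/height
direction_logits, location_logits = model(imu, ff)
print(direction_logits.shape, location_logits.shape)
```

Training would minimize the sum of the two heads' cross-entropy losses, which is what lets the shared backbone learn from both tasks at once.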
6
EarRumble: Discreet Hands- and Eyes-Free Input by Voluntary Tensor Tympani Muscle Contraction
Tobias Röddiger (Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany)Christopher Clarke (Lancaster University, Lancaster, United Kingdom)Daniel Wolffram (Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany)Matthias Budde (Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany)Michael Beigl (Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany)
We explore how discreet input can be provided using the tensor tympani, a small muscle in the middle ear that some people can voluntarily contract to induce a dull rumbling sound. We investigate the prevalence of this ability and people's control over the muscle through an online questionnaire (N=192), in which 43.2% of respondents reported the ability to "ear rumble". Data collected from participants (N=16) shows how in-ear barometry can be used to detect voluntary tensor tympani contraction in the sealed ear canal. This data was used to train a classifier based on three simple ear rumble "gestures", which achieved 95% accuracy. Finally, we evaluate the use of ear rumbling for interaction, grounded in three manual, dual-task application scenarios (N=8). This highlights the applicability of EarRumble as a low-effort and discreet eyes- and hands-free interaction technique that users found "magical" and "almost telepathic".
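A detection pipeline in this spirit might window the in-ear pressure signal, extract simple statistics, and train an off-the-shelf classifier on the three gestures. The features, the simulated pressure dips, and the model choice below are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of gesture classification from in-ear pressure (barometry):
# window the signal, compute simple statistics, train a standard classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window: np.ndarray) -> np.ndarray:
    return np.array([window.min(), window.max(), window.std(),
                     np.abs(np.diff(window)).sum()])

rng = np.random.default_rng(1)
# Toy data: 3 rumble "gestures" simulated as pressure dips of varying length.
X, y = [], []
for label, dip_len in enumerate([10, 25, 50]):
    for _ in range(40):
        w = rng.normal(0, 0.05, 200)
        start = rng.integers(0, 200 - dip_len)
        w[start:start + dip_len] -= 1.0   # contraction lowers canal pressure
        X.append(features(w))
        y.append(label)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```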