List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

15
FakeForward: Using Deepfake Technology for Feedforward Learning
Christopher Clarke (University of Bath, Bath, United Kingdom), Jingnan Xu (University of Bath, Bath, United Kingdom), Ye Zhu (University of Bath, Bath, United Kingdom), Karan Dharamshi (University of Bath, Bath, United Kingdom), Harry McGill (University of Bath, Bath, United Kingdom), Stephen Black (University of Bath, Bath, United Kingdom), Christof Lutteroth (University of Bath, Bath, United Kingdom)
Videos are commonly used to support learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data and labour intensive: a lot of video footage needs to be collected and manually edited in order to create an effective self-modelling video. We address this by presenting FakeForward -- a method which uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user’s. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.
13
PunchPrint: Creating Composite Fiber-Filament Craft Artifacts by Integrating Punch Needle Embroidery and 3D Printing
Ashley Del Valle (University of California Santa Barbara, Santa Barbara, California, United States), Mert Toka (University of California Santa Barbara, Santa Barbara, California, United States), Alejandro Aponte (University of California Santa Barbara, Santa Barbara, California, United States), Jennifer Jacobs (University of California Santa Barbara, Santa Barbara, California, United States)
New printing strategies have enabled 3D-printed materials that imitate traditional textiles. These filament-based textiles are easy to fabricate but lack the look and feel of fiber textiles. We seek to augment 3D-printed textiles with needlecraft to produce composite materials that integrate the programmability of additive fabrication with the richness of traditional textile craft. We present PunchPrint: a technique for integrating fiber and filament in a textile by combining punch needle embroidery and 3D printing. Using a toolpath that imitates textile weave structure, we print a flexible fabric that provides a substrate for punch needle production. We evaluate our material’s robustness through tensile strength and needle compatibility tests. We integrate our technique into a parametric design tool and produce functional artifacts that show how PunchPrint broadens punch needle craft by reducing labor in small, detailed artifacts, enabling the integration of openings and multiple yarn weights, and scaffolding soft 3D structures.
8
Imprimer: Computational Notebooks for CNC Milling
Jasper Tran O'Leary (University of Washington, Seattle, Washington, United States), Gabrielle Benabdallah (University of Washington, Seattle, Washington, United States), Nadya Peek (University of Washington, Seattle, Washington, United States)
Digital fabrication in industrial contexts involves standardized procedures that prioritize precision and repeatability. However, fabrication machines are now available for practitioners who focus instead on experimentation. In this paper, we reframe hobbyist CNC milling as writing literate programs which interleave documentation, interactive graphics, and source code for machine control. To test this approach, we present Imprimer, a machine infrastructure for a CNC mill and an associated library for a computational notebook. Imprimer lets makers learn experimentally, prototype new interactions for making, and understand physical processes by writing and debugging code. We demonstrate three experimental milling workflows as computational notebooks, conduct a user study with practitioners with a range of backgrounds, and discuss literate programming as a future vision for digital fabrication altogether.
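As an illustration of the literate-programming style the abstract describes, here is a minimal notebook-style sketch in Python (our own example, not Imprimer's API; the serial port, feed rate, and cutting depth are assumptions):

```python
# A notebook cell might interleave this code with prose and plots. The
# `facing_pass` generator and all machine parameters below are illustrative;
# Imprimer's actual library is not shown in the abstract.
import serial  # pyserial

def facing_pass(width_mm, height_mm, stepover_mm, depth_mm, feed=600):
    """Yield G-code lines for a simple zigzag facing pass."""
    yield "G21"                      # units: millimeters
    yield "G90"                      # absolute positioning
    yield "G0 Z5"                    # retract to a safe height
    yield "G0 X0 Y0"
    yield f"G1 Z{-depth_mm:.3f} F{feed}"
    y, direction = 0.0, 1
    while y <= height_mm:
        x = width_mm if direction > 0 else 0.0
        yield f"G1 X{x:.3f} Y{y:.3f} F{feed}"
        y += stepover_mm
        direction *= -1
    yield "G0 Z5"

# Stream the toolpath to a GRBL-style controller (port name is an assumption).
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as mill:
    for line in facing_pass(50, 30, stepover_mm=2, depth_mm=0.5):
        mill.write((line + "\n").encode())
        mill.readline()              # wait for the controller's "ok"
```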
6
3D Printable Play-Dough: New Biodegradable Materials and Creative Possibilities for Digital Fabrication
Leah Buechley (University of New Mexico, Albuquerque, New Mexico, United States), Ruby Ta (University of New Mexico, Albuquerque, New Mexico, United States)
Play-dough is a brightly-colored, easy-to-make, and familiar material. We have developed and tested custom play-dough materials that can be employed in 3D printers designed for clay. This paper introduces a set of recipes for 3D printable play-dough along with an exploration of these materials' print characteristics. We explore the design potential of play-dough as a sustainable fabrication material, highlighting its recyclability, compostability, and repairability. We demonstrate how custom-color prints can be designed and constructed and describe how play-dough can be used as a support material for clay 3D prints. We also present a set of example artifacts made from play-dough and discuss opportunities for future research.
5
Understanding Moderators' Conflict and Conflict Management Strategies with Streamers in Live Streaming Communities
Jie Cai (Penn State University, University Park, Pennsylvania, United States), Donghee Yvette Wohn (New Jersey Institute of Technology, Newark, New Jersey, United States)
As each micro community centered around the streamer attempts to set its own guidelines in live streaming communities, it is common for volunteer moderators (mods) and the streamer to disagree on how to handle various situations. In this study, we conducted an online survey (N=240) with live streaming mods to explore their commitment to the streamer to grow the micro community and the different styles in which they handle conflicts with the streamer. We found that 1) mods apply more active and cooperative styles than passive and assertive styles to manage conflicts, though they may feel compelled to do so, and 2) mods with strong commitment to the streamer tend to apply styles that show either high concern for the streamer or low concern for themselves. We reflect on how these results can affect micro community development and recommend designs to mitigate conflict and strengthen commitment.
4
Feel the Force, See the Force: Exploring Visual-tactile Associations of Deformable Surfaces with Colours and Shapes
Cameron Steer (University of Bath, Bath, United Kingdom), Teodora Dinca (University of Bath, Bath, United Kingdom), Crescent Jicol (University of Bath, Bath, United Kingdom), Michael J. Proulx (University of Bath, Bath, United Kingdom), Jason Alexander (University of Bath, Bath, United Kingdom)
Deformable interfaces provide unique interaction potential for force input, for example, when users physically push into a soft display surface. However, there remains limited understanding of which visual-tactile design elements signify the presence and stiffness of such deformable force-input components. In this paper, we explore how people associate surface stiffness with colours, graphical shapes, and physical shapes. We conducted a cross-modal correspondence (CC) study in which 30 participants associated different surface stiffnesses with colours and shapes. Our findings provide evidence of CCs between stiffness levels and a subset of the 2D/3D shapes and colours used in the study. We distil our findings into three design recommendations: (1) lighter colours should be used to indicate soft surfaces, and darker colours should indicate stiff surfaces; (2) rounded shapes should be used to indicate soft surfaces, while less-curved shapes should be used to indicate stiffer surfaces; and (3) longer 2D drop-shadows should be used to indicate softer surfaces, while shorter drop-shadows should be used to indicate stiffer surfaces.
4
A Human-Computer Collaborative Editing Tool for Conceptual Diagrams
Lihang Pan (Tsinghua University, Beijing, China), Chun Yu (Tsinghua University, Beijing, China), Zhe He (Tsinghua University, Beijing, China), Yuanchun Shi (Tsinghua University, Beijing, China)
Editing (e.g., of conceptual diagrams) is a typical office task that requires numerous tedious GUI operations, resulting in poor interaction efficiency and user experience, especially on mobile devices. In this paper, we present a new type of human-computer collaborative editing tool (CET) that enables accurate and efficient editing with little interaction effort. CET divides the task into two parts, and the human and the computer focus on their respective specialties: the human describes high-level editing goals with multimodal commands, while the computer calculates, recommends, and performs the detailed operations. We conducted a formative study (N = 16) to determine the concrete task division and implemented the tool on Android devices for the specific task of editing conceptual diagrams. The user study (N = 24 + 20) showed that it increased diagram editing speed by 32.75% compared with existing state-of-the-art commercial tools and led to better editing results and user experience.
4
Augmenting Human Cognition with an AI-Mediated Intelligent Visual Feedback
Songlin Xu (University of California, San Diego, San Diego, California, United States), Xinyu Zhang (University of California San Diego, San Diego, California, United States)
In this paper, we introduce an AI-mediated framework that can provide intelligent feedback to augment human cognition. Specifically, we leverage deep reinforcement learning (DRL) to provide adaptive time-pressure feedback that improves user performance in an arithmetic task. Time-pressure feedback can either improve or impair user performance by regulating user attention and anxiety. Adaptive time-pressure feedback controlled by a DRL policy according to users' real-time performance could potentially resolve this trade-off. However, DRL training and hyperparameter tuning may require large amounts of data and iterative user studies. Therefore, we propose a dual-DRL framework that trains a regulation DRL agent to regulate user performance by interacting with another simulation DRL agent that mimics user cognition behaviors learned from an existing dataset. Our user study demonstrates the feasibility and effectiveness of the dual-DRL framework in augmenting user performance, in comparison to the baseline group.
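To make the dual-agent idea concrete, here is a schematic sketch (our reading of the abstract; `UserSimulator` and `RegulationAgent` are hypothetical stand-ins, and a simple bandit learner substitutes for the actual DRL policy):

```python
# Schematic sketch of the dual-agent training loop, not the authors' code:
# a simulator mimics user behavior learned from a dataset, and a regulation
# agent learns a time-pressure policy against it, with no live user needed.
import random

class UserSimulator:
    """Toy stand-in: accuracy degrades when pressure strays from a sweet spot."""
    def respond(self, time_pressure):
        sweet_spot = 0.6                      # assumed optimal pressure level
        p_correct = max(0.1, 1.0 - abs(time_pressure - sweet_spot))
        return 1.0 if random.random() < p_correct else 0.0

class RegulationAgent:
    """Toy epsilon-greedy learner over discrete pressure levels."""
    def __init__(self, levels=(0.2, 0.4, 0.6, 0.8)):
        self.q = {lvl: 0.0 for lvl in levels}
        self.n = {lvl: 0 for lvl in levels}
    def act(self, eps=0.1):
        if random.random() < eps:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)
    def update(self, lvl, reward):
        self.n[lvl] += 1
        self.q[lvl] += (reward - self.q[lvl]) / self.n[lvl]

sim, agent = UserSimulator(), RegulationAgent()
for trial in range(10_000):                   # train against the simulator
    lvl = agent.act()
    agent.update(lvl, sim.respond(lvl))
print("learned pressure level:", max(agent.q, key=agent.q.get))
```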
4
CiteSee: Augmenting Citations in Scientific Papers with Persistent and Personalized Historical Context
Joseph Chee Chang (Allen Institute for AI, Seattle, Washington, United States), Amy X. Zhang (University of Washington, Seattle, Washington, United States), Jonathan Bragg (Allen Institute for Artificial Intelligence, Seattle, Washington, United States), Andrew Head (University of Pennsylvania, Philadelphia, Pennsylvania, United States), Kyle Lo (Allen Institute for Artificial Intelligence, Seattle, Washington, United States), Doug Downey (Allen Institute for Artificial Intelligence, Seattle, Washington, United States), Daniel S. Weld (Allen Institute for Artificial Intelligence, Seattle, Washington, United States)
When reading a scholarly article, inline citations help researchers contextualize the current article and discover relevant prior work. However, it can be challenging to prioritize and make sense of the hundreds of citations encountered during literature reviews. This paper introduces CiteSee, a paper reading tool that leverages a user's publishing, reading, and saving activities to provide personalized visual augmentations and context around citations. First, CiteSee connects the current paper to familiar contexts by surfacing known citations a user had cited or opened. Second, CiteSee helps users prioritize their exploration by highlighting relevant but unknown citations based on saving and reading history. We conducted a lab study that suggests CiteSee is significantly more effective for paper discovery than three baselines. A field deployment study shows CiteSee helps participants keep track of their explorations and leads to better situational awareness and increased paper discovery via inline citation when conducting real-world literature reviews.
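The personalization signal can be pictured as a simple scoring function over a reader's history (our illustration, not CiteSee's implementation; the weights and field names are assumptions):

```python
# Minimal sketch of history-based citation scoring; weights are illustrative.
def score_citation(paper_id, history, w_cited=3.0, w_opened=2.0, w_saved=1.5):
    score = 0.0
    if paper_id in history["cited_by_user"]:
        score += w_cited      # familiar context: the user has cited it before
    if paper_id in history["opened"]:
        score += w_opened     # familiar context: the user has read it
    if paper_id in history["saved"]:
        score += w_saved      # relevant but possibly unread: worth exploring
    return score

history = {"cited_by_user": {"p1"}, "opened": {"p2"}, "saved": {"p3", "p2"}}
inline = ["p1", "p2", "p3", "p4"]
ranked = sorted(inline, key=lambda p: score_citation(p, history), reverse=True)
print(ranked)  # ['p2', 'p1', 'p3', 'p4'] -> highlight the top of this list
```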
3
Libraries of Things: Understanding the Challenges of Sharing Tangible Collections and the Opportunities for HCI
Lee Jones (Queen's University, Kingston, Ontario, Canada), Alaa Nousir (Queen's University, Kingston, Ontario, Canada), Tom Everrett (Ingenium - Canada's Museums of Science and Innovation, Ottawa, Ontario, Canada), Sara Nabil (Queen's University, Kingston, Ontario, Canada)
“Libraries of Things” are tangible collections of borrowable objects. There are many benefits to Libraries of Things, such as making objects and skill-building accessible, reducing waste through the sharing of items, and saving the costs associated with purchasing rarely-used items. We introduce the first HCI study of Libraries of Things, interviewing 23 librarians who run a variety of collections, such as handheld tools, gear, and musical instruments, within public institutions and more grass-roots efforts in the private sector. In our findings, we discuss the challenges these collections face in changing behavioural patterns from buying to borrowing, helping individuals 'try new things', iterating to find sharable items, training staff, and requiring manual intervention throughout the borrowing cycle. We present five opportunities for HCI research to support interactive skill-sharing, self-borrowing, maintenance recognition and cataloguing of 'things', organizing non-uniform inventories, and creating public awareness. Further in-the-wild studies should also consider the tensions between the values of these organizations and low-cost, convenient usage.
3
“I Won't Go Speechless”: Design Exploration on a Real-Time Text-To-Speech Speaking Tool for Videoconferencing
Wooseok Kim (KAIST, Daejeon, Korea, Republic of), Jian Jun (KAIST, Daejeon, Korea, Republic of), Minha Lee (KAIST, Daejeon, Korea, Republic of), Sangsu Lee (KAIST, Daejeon, Korea, Republic of)
The COVID-19 pandemic has shifted many business activities to non-face-to-face settings, and videoconferencing has become a new paradigm. However, conference spaces isolated from surrounding interference are not always readily available. People frequently join calls from places with unexpected crowds or acquaintances, such as cafés, living rooms, and shared offices. These environments impose constraints that can make it challenging to speak up during videoconferencing. To alleviate these issues and support users in speaking-restrained spatial contexts, we propose a text-to-speech (TTS) speaking tool as a new method to support active videoconferencing participation. We explored the potential of a TTS speaking tool and investigated its empirical challenges and user expectations using a technology probe and a participatory design methodology. Based on our findings, we discuss the need for a TTS speaking tool and suggest design considerations for its application in videoconferencing.
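The core speaking loop of such a tool could be as simple as the following sketch, built on the off-the-shelf pyttsx3 library (our illustration, not the probe's implementation; routing the synthesized audio into the call, e.g. through a virtual microphone device, is assumed external setup):

```python
# Minimal type-to-speak loop using pyttsx3; everything beyond synthesis
# (audio routing into the videoconference) is assumed to be configured
# outside this script.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 170)    # speaking rate, words per minute
while True:
    text = input("type to speak (empty line to quit): ")
    if not text:
        break
    engine.say(text)
    engine.runAndWait()            # blocks until the utterance finishes
```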
3
The Walking Talking Stick: Understanding Automated Note-Taking in Walking Meetings
Luke Haliburton (LMU Munich, Munich, Germany), Natalia Bartłomiejczyk (Lodz University of Technology, Lodz, Poland), Albrecht Schmidt (LMU Munich, Munich, Germany), Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden), Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland)
While walking meetings offer a healthy alternative to sit-down meetings, they also pose practical challenges. Taking notes is difficult while walking, which limits the potential of walking meetings. To address this, we designed the Walking Talking Stick---a tangible device with integrated voice recording, transcription, and a physical highlighting button to facilitate note-taking during walking meetings. We investigated our system in a three-condition between-subjects user study with thirty pairs of participants (N=60) who conducted 15-minute outdoor walking meetings. Participants either used clip-on microphones, the prototype without the button, or the prototype with the highlighting button. We found that the tangible device increased task focus, and the physical highlighting button facilitated turn-taking and resulted in more useful notes. Our work demonstrates how interactive artifacts can incentivize users to hold meetings in motion and enhance conversation dynamics. We contribute insights for future systems that support conducting work tasks in mobile environments.
3
Smartphone-derived Virtual Keyboard Dynamics Coupled with Accelerometer Data as a Window into Understanding Brain Health
Emma Ning (University of Illinois at Chicago, Chicago, Illinois, United States), Andrea T. Cladek (University of Illinois at Chicago, Chicago, Illinois, United States), Mindy K. Ross (University of Illinois at Chicago, Chicago, Illinois, United States), Sarah Kabir (University of Illinois at Chicago, Chicago, Illinois, United States), Amruta Barve (University of Illinois at Chicago, Chicago, Illinois, United States), Ellyn Kennelly (Wayne State University, Detroit, Michigan, United States), Faraz Hussain (University of Illinois at Chicago, Chicago, Illinois, United States), Jennifer Duffecy (University of Illinois at Chicago, Chicago, Illinois, United States), Scott Langenecker (University of Utah, Salt Lake City, Utah, United States), Theresa Nguyen (University of Illinois at Chicago, Chicago, Illinois, United States), Theja Tulabandhula (University of Illinois at Chicago, Chicago, Illinois, United States), John Zulueta (University of Illinois at Chicago, Chicago, Illinois, United States), Olusola A. Ajilore (University of Illinois, Chicago (UIC), Chicago, Illinois, United States), Alexander P. Demos (University of Illinois at Chicago, Chicago, Illinois, United States), Alex Leow (University of Illinois, Chicago (UIC), Chicago, Illinois, United States)
We examine the feasibility of using accelerometer data exclusively collected during typing on a custom smartphone keyboard to study whether typing dynamics are associated with daily variations in mood and cognition. As part of an ongoing digital mental health study involving mood disorders, we collected data from a well-characterized clinical sample (N = 85) and classified accelerometer data per typing session into orientation (upright vs. not) and motion (active vs. not). The mood disorder group showed lower cognitive performance despite mild symptoms (depression/mania). There were also diurnal pattern differences with respect to cognitive performance: individuals with higher cognitive performance typed faster and were less sensitive to time of day. They also exhibited more well-defined diurnal patterns in smartphone keyboard usage: they engaged with the keyboard more during the day and tapered their usage more at night compared to those with lower cognitive performance, suggesting a healthier usage of their phone.
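A plausible sketch of the per-session labeling step (our assumptions, not the study's pipeline): derive orientation from the mean gravity direction and motion from the variance of acceleration magnitude.

```python
# Illustrative per-session classification; device-axis convention and
# thresholds are assumptions, not values from the paper.
import numpy as np

def classify_session(acc):
    """acc: (n_samples, 3) array of x, y, z accelerations in m/s^2."""
    mean_axis = acc.mean(axis=0)
    gravity = mean_axis / np.linalg.norm(mean_axis)
    # Assumed convention: held upright, gravity points along the device -y axis.
    upright = gravity[1] < -0.7                      # assumed threshold
    magnitude = np.linalg.norm(acc, axis=1)
    active = magnitude.std() > 0.8                   # assumed threshold (m/s^2)
    return ("upright" if upright else "not_upright",
            "active" if active else "stationary")

session = np.random.normal([0.0, -9.6, 1.0], 0.3, size=(500, 3))
print(classify_session(session))  # -> ('upright', 'stationary')
```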
3
“Should I Follow the Human, or Follow the Robot?” — Robots in Power Can Have More Influence Than Humans on Decision-Making
Yoyo Tsung-Yu Hou (Cornell University, Ithaca, New York, United States), Wen-Ying Lee (Cornell University, Ithaca, New York, United States), Malte F. Jung (Cornell University, Ithaca, New York, United States)
Artificially intelligent (AI) agents such as robots are increasingly delegated power in work settings, yet it remains unclear how power functions in interactions with both humans and robots, especially when they directly compete for influence. Here we present an experiment where every participant was matched with one human and one robot to perform decision-making tasks. By manipulating who has power, we created three conditions: human as leader, robot as leader, and a no-power-difference control. The results showed that the participants were significantly more influenced by the leader, regardless of whether the leader was a human or a robot. However, they generally held a more positive attitude toward the human than the robot, although they considered whichever was in power as more competent. This study illustrates the importance of power for future Human-Robot Interaction (HRI) and Human-AI Interaction (HAI) research, as it addresses pressing concerns of society about AI-powered intelligent agents.
3
Subjective Probability Correction for Uncertainty Representations
Fumeng Yang (Northwestern University, Evanston, Illinois, United States), Maryam Hedayati (Northwestern University, Evanston, Illinois, United States), Matthew Kay (Northwestern University, Chicago, Illinois, United States)
We propose a new approach to uncertainty communication: we keep the uncertainty representation fixed, but adjust the distribution displayed to compensate for biases in people’s subjective probability in decision-making. To do so, we adopt a linear-in-probit model of subjective probability and derive two corrections to a Normal distribution based on the model’s intercept and slope: one correcting all right-tailed probabilities, and the other preserving the mode and one focal probability. We then conduct two experiments on U.S. demographically-representative samples. We show participants hypothetical U.S. Senate election forecasts as text or a histogram and elicit their subjective probabilities using a betting task. The first experiment estimates the linear-in-probit intercepts and slopes, and confirms the biases in participants’ subjective probabilities. The second, preregistered follow-up shows participants the bias-corrected forecast distributions. We find the corrections substantially improve participants’ decision quality by reducing the integrated absolute error of their subjective probabilities compared to the true probabilities. These corrections can be generalized to any univariate probability or confidence distribution, giving them broad applicability. Our preprint, code, data, and preregistration are available at https://doi.org/10.17605/osf.io/kcwxm.
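Written out, the correction implied by a linear-in-probit model might look like this (our reconstruction from the abstract, with α and β the fitted intercept and slope):

```latex
% Linear-in-probit model of subjective probability (our reconstruction):
% a displayed probability p is perceived as \hat{p}, where
%   \Phi^{-1}(\hat{p}) = \alpha + \beta \, \Phi^{-1}(p),
% and \Phi is the standard Normal CDF. To make the perceived probability
% equal the true probability p^{*}, display the corrected probability
\[
  \tilde{p} \;=\; \Phi\!\left(\frac{\Phi^{-1}(p^{*}) - \alpha}{\beta}\right).
\]
% Applying this to all right-tailed probabilities of a Normal forecast
% distribution corresponds to the first of the paper's two corrections.
```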
3
The Tactile Dimension: A Method for Physicalizing Touch Behaviors
Laura J. Perovich (Northeastern University, Boston, Massachusetts, United States), Bernice Rogowitz (Visual Perspectives Research, Westchester, New York, United States), Victoria Crabb (Northeastern University, Boston, Massachusetts, United States), Jack Vogelsang (Northeastern University, Boston, Massachusetts, United States), Sara Hartleben (Northeastern University, Boston, Massachusetts, United States), Dietmar Offenhuber (Northeastern University, Boston, Massachusetts, United States)
Traces of touch provide valuable insight into how we interact with the physical world. Measuring touch behavior, however, is expensive and imprecise. Utilizing a fluorescent UV tracer powder, we developed a low-cost analog method to capture persistent, high-contrast touch records on arbitrary objects. We describe our process for selecting a tracer, methods for capturing, enhancing, and aggregating traces, and approaches to examining qualitative aspects of the user experience. Three user studies demonstrate key features of this method. First, we show that it provides clear and durable traces on objects representative of scientific visualization, physicalization, and product design. Second, we demonstrate how this method could be used to study touch perception, by measuring how task and narrative framing elicit different touch behaviors on the same object. Third, we demonstrate how this method can be used to evaluate data physicalizations by observing how participants touch two different physicalizations of COVID-19 time-series data.
3
Birds of a Feather Video-Flock Together: Design and Evaluation of an Agency-Based Parrot-to-Parrot Video-Calling System for Interspecies Ethical Enrichment
Rebecca Kleinberger (Northeastern University, Boston, Massachusetts, United States), Jennifer Cunha (Parrot Kindergarten, Jupiter, Florida, United States), Megha M. Vemuri (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States), Ilyena Hirskyj-Douglas (The University of Glasgow, Glasgow, United Kingdom)
Over 20 million parrots are kept as pets in the US, often lacking appropriate stimuli to meet their high social, cognitive, and emotional needs. After reviewing bird perception and agency literature, we developed an approach to allow parrots to engage in video-calling other parrots. Following a pilot experiment and expert survey, we ran a three-month study with 18 pet birds to evaluate the potential value and usability of a parrot-parrot video-calling system. We assessed the system in terms of perception, agency, engagement, and overall perceived benefits. With 147 bird-triggered calls, our results show that 1) every bird used the system, 2) most birds exhibited high motivation and intentionality, and 3) all caretakers reported perceived benefits, some arguably life-transformative, such as learning to forage or even to fly by watching others. We report on individual insights and propose considerations regarding ethics and the potential of parrot video-calling for enrichment.
3
Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools
Frederic Gmeiner (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Humphrey Yang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Kenneth Holstein (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Nikolas Martelaro (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks. These tools take on more agentic roles than traditional computer-aided design tools and are often portrayed as “co-creators.” Yet, working effectively with such systems requires different skills than working with complex CAD tools alone. To date, we know little about how engineering designers learn to work with AI-based design tools. In this study, we observed trained designers as they learned to work with two AI-based tools on a realistic design task. We find that designers face many challenges in learning to effectively co-create with current systems, including challenges in understanding and adjusting AI outputs and in communicating their design goals. Based on our findings, we highlight several design opportunities to better support designer-AI co-creation.
3
UndoPort: Exploring the Influence of Undo-Actions for Locomotion in Virtual Reality on the Efficiency, Spatial Understanding and User Experience
Florian Müller (LMU Munich, Munich, Germany), Arantxa Ye (LMU Munich, Munich, Germany), Dominik Schön (TU Darmstadt, Darmstadt, Germany), Julian Rasch (LMU Munich, Munich, Germany)
When we get lost in Virtual Reality (VR) or want to return to a previous location, we use the same methods of locomotion for the way back as for the way forward. This is time-consuming and requires additional physical orientation changes, increasing the risk of getting tangled in the headset's cables. In this paper, we propose the use of undo actions to revert locomotion steps in VR. We explore eight different variations of undo actions as extensions of point&teleport, based on the possibility of undoing position and orientation changes, together with two different visualizations of the undo step (discrete and continuous). We contribute the results of a controlled experiment with 24 participants investigating the efficiency and orientation of the undo techniques in a radial maze task. We found that the combination of position and orientation undo together with a discrete visualization resulted in the highest efficiency without increasing orientation errors.
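The mechanism reduces to a small data structure: every teleport pushes the pre-teleport pose onto a stack, and undo pops it. A minimal sketch (ours, not the paper's code; how the position-only variant handles orientation is an assumption):

```python
# Undo stack for VR locomotion: teleports record where the user came from,
# and undo restores that pose, with or without the saved orientation.
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple       # (x, y, z) in world coordinates
    yaw_degrees: float    # heading; restored only by position+orientation undo

class LocomotionHistory:
    def __init__(self):
        self._stack = []
    def teleport(self, current: Pose, target: Pose) -> Pose:
        self._stack.append(current)   # remember where we came from
        return target
    def undo(self, current: Pose, restore_orientation=True) -> Pose:
        if not self._stack:
            return current
        previous = self._stack.pop()
        if restore_orientation:
            return previous
        return Pose(previous.position, current.yaw_degrees)

history = LocomotionHistory()
pose = Pose((0, 0, 0), 0.0)
pose = history.teleport(pose, Pose((4, 0, 2), 90.0))
pose = history.undo(pose)             # back to (0, 0, 0), facing 0 degrees
```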
3
User-Driven Constraints for Layout Optimisation in Augmented Reality
Aziz Niyazov (IRIT - University of Toulouse, Toulouse, France), Barrett Ens (Monash University, Melbourne, Australia), Kadek Ananta Satriadi (Monash University, Melbourne, Australia), Nicolas Mellado (CNRS, Toulouse, France), Loic Barthe (IRIT - University of Toulouse, Toulouse, France), Tim Dwyer (Monash University, Melbourne, VIC, Australia), Marcos Serrano (IRIT - Elipse, Toulouse, France)
Automatic layout optimisation allows users to arrange augmented reality content in the real-world environment without the need for tedious manual interactions. This optimisation is often based on modelling the intended content placement as constraints, defined as cost functions. Then, applying a cost minimization algorithm leads to a desirable placement. However, such an approach is limited by the lack of user control over the optimisation results. In this paper we explore the concept of user-driven constraints for augmented reality layout optimisation. With our approach users can define and set up their own constraints directly within the real-world environment. We first present a design space composed of three dimensions: the constraints, the regions of interest and the constraint parameters. Then we explore which input gestures can be employed to define the user-driven constraints of our design space through a user elicitation study. Using the results of the study, we propose a holistic system design and implementation demonstrating our user-driven constraints, which we evaluate in a final user study where participants had to create several constraints at the same time to arrange a set of virtual contents.
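The constraint-as-cost-function framing can be sketched in a few lines (our illustration, not the paper's system; the two example constraints and their weights are assumptions):

```python
# Layout optimisation as cost minimization over a 2D label position: each
# user-driven constraint contributes one cost term, and an off-the-shelf
# optimizer finds a low-cost placement.
import numpy as np
from scipy.optimize import minimize

anchor = np.array([1.0, 2.0])       # "place near this object"
forbidden = np.array([1.2, 2.1])    # "keep away from this region"

def cost(p, w_near=1.0, w_avoid=4.0, radius=0.5):
    near = np.sum((p - anchor) ** 2)              # quadratic pull to anchor
    d = np.linalg.norm(p - forbidden)
    avoid = max(0.0, radius - d) ** 2             # penalty inside the region
    return w_near * near + w_avoid * avoid

result = minimize(cost, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print("placement:", result.x)       # a position balancing both constraints
```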
3
GestureExplorer: Immersive Visualisation and Exploration of Gesture Data
Ang Li (Monash University, Melbourne, Australia), Jiazhou Liu (Monash University, Melbourne, VIC, Australia), Maxime Cordeil (The University Of Queensland, Brisbane, Australia), Jack Topliss (University of Canterbury, Christchurch, Canterbury, New Zealand), Thammathip Piumsomboon (University of Canterbury, Christchurch, Canterbury, New Zealand), Barrett Ens (Monash University, Melbourne, Australia)
This paper presents the design and evaluation of GestureExplorer, an Immersive Analytics tool that supports the interactive exploration, classification and sensemaking with large sets of 3D temporal gesture data. GestureExplorer features 3D skeletal and trajectory visualisations of gestures combined with abstract visualisations of clustered sets of gestures. By leveraging the large immersive space afforded by a Virtual Reality interface our tool allows free navigation and control of viewing perspective for users to gain a better understanding of gestures. We explored a selection of classification methods to provide an overview of the dataset that was linked to a detailed view of the data that showed different visualisation modalities. We evaluated GestureExplorer with two user studies and collected feedback from participants with diverse visualisation and analytics backgrounds. Our results demonstrated the promising capability of GestureExplorer for providing a useful and engaging experience in exploring and analysing gesture data.
3
UEyes: Understanding Visual Saliency across User Interface Types
Yue Jiang (Aalto University, Espoo, Finland), Luis A. Leiva (University of Luxembourg, Esch-sur-Alzette, Luxembourg), Hamed Rezazadegan Tavakoli (Nokia Technologies, Espoo, Finland), Paul R. B. Houssel (University of Luxembourg, Esch-sur-Alzette, Luxembourg), Julia Kylmälä (Aalto University, Espoo, Finland), Antti Oulasvirta (Aalto University, Helsinki, Finland)
While user interfaces (UIs) display elements such as images and text in a grid-based layout, UI types differ significantly in the number of elements and how they are displayed. For example, webpage designs rely heavily on images and text, whereas desktop UIs tend to feature numerous small images. To examine how such differences affect the way users look at UIs, we collected and analyzed a large eye-tracking-based dataset, UEyes (62 participants and 1,980 UI screenshots), covering four major UI types: webpage, desktop UI, mobile UI, and poster. We analyze differences across UI types in biases related to factors such as color, location, and gaze direction. We also compare state-of-the-art predictive models and propose improvements for better capturing typical tendencies across UI types. Both the dataset and the models are publicly available.
3
Co-Writing with Opinionated Language Models Affects Users' Views
Maurice Jakesch (Cornell University, Ithaca, New York, United States), Advait Bhat (Microsoft Research India, Bangalore, India), Daniel Buschek (University of Bayreuth, Bayreuth, Germany), Lior Zalmanson (Tel Aviv University, Tel Aviv, Tel Aviv District, Israel), Mor Naaman (Cornell Tech, New York, New York, United States)
If large language models like GPT-3 preferably produce a particular point of view, they may influence people's opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write -- and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
2
PumpVR: Rendering Weight of Objects and Avatars through Liquid Mass Transfer in Virtual Reality
Alexander Kalus (University of Regensburg, Regensburg, Germany), Martin Kocur (University of Regensburg, Regensburg, Germany), Johannes Klein (University of Regensburg, Regensburg, Germany), Manuel Mayer (University of Regensburg, Regensburg, Germany), Niels Henze (University of Regensburg, Regensburg, Germany)
Perceiving objects' and avatars’ weight in Virtual Reality (VR) is important to understand their properties and naturally interact with them. However, commercial VR controllers cannot render weight. Controllers presented by previous work are single-handed, slow, or only render a small mass. In this paper, we present PumpVR that renders weight by varying the controllers’ mass according to the properties of virtual objects or bodies. Using a bi-directional pump and solenoid valves, the system changes the controllers' absolute weight by transferring water in or out with an average error of less than 5%. We implemented VR use cases with objects and avatars of different weights to compare the system with standard controllers. A study with 24 participants revealed significantly higher realism and enjoyment when using PumpVR to interact with virtual objects. Using the system to render body weight had significant effects on virtual embodiment, perceived exertion, and self-perceived fitness.
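The underlying control problem is simple to state: how much water to move, and for how long to run the pump, to hit a target controller mass. A back-of-the-envelope sketch (the empty mass and flow rate below are assumed values, not the paper's specifications):

```python
# Compute the water volume and pump duration needed to reach a target
# controller mass; density of water is ~1 g/ml, so grams map to milliliters.
WATER_DENSITY_G_PER_ML = 1.0

def pump_plan(current_mass_g, target_mass_g, flow_ml_per_s=30.0):
    delta_ml = (target_mass_g - current_mass_g) / WATER_DENSITY_G_PER_ML
    direction = "fill" if delta_ml > 0 else "drain"
    seconds = abs(delta_ml) / flow_ml_per_s
    return direction, abs(delta_ml), seconds

# Virtual object weighs 480 g; controller currently weighs 250 g.
direction, volume, duration = pump_plan(250.0, 480.0)
print(f"{direction} {volume:.0f} ml over {duration:.1f} s")
# -> fill 230 ml over 7.7 s
```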
2
Visible Nuances: A Caption System to Visualize Paralinguistic Speech Cues for Deaf and Hard-of-Hearing Individuals
JooYeong Kim (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of), Sooyeon Ahn (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of), Jin-Hyuk Hong (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)
Captions help deaf and hard-of-hearing (DHH) individuals understand video content by visually conveying voice information. In speech, the literal content and paralinguistic cues (e.g., pitch and nuance) work together to convey the speaker's real intention. However, current captions are limited in their capacity to deliver fine nuances because they cannot fully convey these paralinguistic cues. This paper proposes an audio-visualized caption system that automatically visualizes paralinguistic cues through various caption elements (thickness, height, font type, and motion). A comparative study with 20 DHH participants demonstrates how our system helps DHH individuals better access paralinguistic cues while watching videos. Particularly in the case of formal talks, they could accurately identify the speaker's nuance more often compared to current captions, without any practice or training. Addressing some issues of legibility and familiarity, the proposed caption system has the potential to enrich DHH individuals' video-watching experience, bringing it closer to what hearing people enjoy.
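The mapping from paralinguistic cues to caption elements can be pictured as a small function (our sketch; the attribute ranges are assumptions, not the paper's calibrated design):

```python
# Normalize pitch and loudness per utterance, then drive caption attributes:
# height via font size, thickness via font weight. Ranges are illustrative.
def caption_style(pitch_hz, loudness_db, pitch_range=(80.0, 300.0),
                  loud_range=(40.0, 80.0)):
    def norm(v, lo, hi):
        return min(1.0, max(0.0, (v - lo) / (hi - lo)))
    p = norm(pitch_hz, *pitch_range)
    l = norm(loudness_db, *loud_range)
    return {
        "font_size_px": 16 + round(12 * p),   # higher pitch -> taller text
        "font_weight": 300 + round(500 * l),  # louder -> thicker strokes
    }

print(caption_style(pitch_hz=240.0, loudness_db=72.0))
# -> {'font_size_px': 25, 'font_weight': 700}
```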
2
Imagine That! Imaginative Suggestibility Affects Presence in Virtual Reality
Crescent Jicol (University of Bath, Bath, United Kingdom), Christopher Clarke (University of Bath, Bath, United Kingdom), Emilia Tor (University of Bath, Bath, United Kingdom), Hiu Lam Yip (University of Bath, Bath, United Kingdom), Jinha Yoon (University of Bath, Bath, Somerset, United Kingdom), Chris Bevan (University of Bristol, Bristol, United Kingdom), Hugh Bowden (King's College London, London, United Kingdom), Elisa Brann (King's College London, London, United Kingdom), Kirsten Cater (University of Bristol, Bristol, United Kingdom), Richard Cole (University of Bristol, Bristol, United Kingdom), Quinton Deeley (King's College London, London, United Kingdom), Esther Eidinow (University of Bristol, Bristol, United Kingdom), Eamonn O'Neill (University of Bath, Bath, United Kingdom), Christof Lutteroth (University of Bath, Bath, United Kingdom), Michael J. Proulx (University of Bath, Bath, United Kingdom)
Personality characteristics can affect how much presence an individual experiences in virtual reality, and researchers have explored how it may be possible to prime users to increase their sense of presence. A personality characteristic that has yet to be explored in the VR literature is imaginative suggestibility, the ability of an individual to successfully experience an imaginary scenario as if it were real. In this paper, we explore how suggestibility and priming affect presence when consulting an ancient oracle in VR as part of an educational experience -- a common VR application. We show for the first time how imaginative suggestibility is a major factor which affects presence and emotions experienced in VR, while priming cues have no effect on participants' (n=128) user experience, contrasting results from prior work. We consider the impacts of these findings for VR design and provide guidelines based on our results.
2
ProxSituated Visualization: An Extended Model of Situated Visualization using Proxies for Physical Referents
Kadek Ananta Satriadi (University of South Australia, Adelaide, Australia), Andrew Cunningham (University of South Australia, Adelaide, Australia), Ross T. Smith (University of South Australia, Adelaide, Australia), Tim Dwyer (Monash University, Melbourne, Australia), Adam Mark Drogemuller (University of South Australia, Adelaide, Australia), Bruce H. Thomas (University of South Australia, Mawson Lakes, South Australia, Australia)
Existing situated visualization models assume the user is able to directly interact with the objects and spaces to which the data refers (known as physical referents). We review a growing body of work exploring scenarios where the user interacts with a proxy representation of the physical referent rather than immediately with the object itself. This introduces a complex mixture of immediate situatedness and proxies of situatedness that goes beyond the expressiveness of current models. We propose an extended model of situated visualization that encompasses Immediate Situated Visualization and ProxSituated (Proxy of Situated) Visualization. Our model describes a set of key entities involved in proxSituated scenarios and important relationships between them. From this model, we derive design dimensions and apply them to existing situated visualization work. The resulting design space allows us to describe and evaluate existing scenarios, as well as to creatively generate new conceptual scenarios.
2
How Bold can we be? The impact of adjusting Font Grade on Readability in light and dark Polarities
Hilary Palmén (Google LLC, Mountain View, California, United States), Michael Dean Gilbert (Google LLC, Mountain View, California, United States), Dave Crossland (Google LLC, Mountain View, California, United States)
Variable font file technology enables adjusting fonts on scaled axes that can include weight and grade. While making text bold increases character width, grade achieves boldness without increasing character width or causing text reflow. Through two studies with a total of 459 participants, we examined the effect of varying grade levels on both glancing and paragraph reading tasks in light and dark modes. We show that dark text on a light background (Light Mode, LM) is read reliably faster than its polar opposite (Dark Mode, DM). We found an effect of mode for both glance and paragraph reading, and an effect of grade in LM at heavier, increased grade levels. Paragraph readers are not choosing, or preferring, LM over DM despite its fluency benefits and reported visual clarity. Software designers can vary grade across the tested font formats to influence design aesthetics and user preferences without worrying about reducing reading fluency.
2
Take My Hand: Automated Hand-Based Spatial Guidance for the Visually Impaired
Adil Rahman (University of Virginia, Charlottesville, Virginia, United States), Md Aashikur Rahman Azim (University of Virginia, Charlottesville, Virginia, United States), Seongkook Heo (University of Virginia, Charlottesville, Virginia, United States)
Tasks that involve locating objects and then moving hands to those specific locations, such as using touchscreens or grabbing objects on a desk, are challenging for the visually impaired. Over the years, audio guidance and haptic feedback have been a staple in hand navigation based assistive technologies. However, these methods require the user to interpret the generated directional cues and then manually perform the hand motions. In this paper, we present automated hand-based spatial guidance to bridge the gap between guidance and execution, allowing visually impaired users to move their hands between two points automatically, without any manual effort. We implement this concept through FingerRover, an on-finger miniature robot that carries the user's finger to target points. We demonstrate the potential applications that can benefit from automated hand-based spatial guidance. Our user study shows the potential of our technique in improving the interaction capabilities of people with visual impairments.
2
Drawing Transforms: A Unifying Interaction Primitive to Procedurally Manipulate Graphics across Style, Space, and Time
Sonia Hashim (University of California Santa Barbara, Santa Barbara, California, United States), Tobias Höllerer (University of California, Santa Barbara, Santa Barbara, California, United States), Jennifer Jacobs (University of California Santa Barbara, Santa Barbara, California, United States)
Procedural functionality enables visual creators to rapidly edit, explore alternatives, and fine-tune artwork in many domains including illustration, motion graphics, and interactive animation. Symbolic procedural tools, such as textual programming languages, are highly expressive but often limit directly manipulating concrete artwork; whereas direct manipulation tools support some procedural expression but limit creators to pre-defined behaviors and inputs. Inspired by visions of using geometric input to create procedural relationships, we identify an opportunity to use vector geometry from artwork to specify expressive user-defined procedural functions. We present Drawing Transforms (DTs), a technique that enables the use of any drawing to procedurally transform the stylistic, spatial, and temporal properties of target artwork. We apply DTs in a prototype motion graphics system to author continuous and discrete transformations, modify multiple elements in a composition simultaneously, create animations, and control fine-grained procedural instantiation. We discuss how DTs can unify procedural authoring through direct manipulation across visual media domains.
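The core idea of turning a drawn stroke into a procedural function can be sketched as follows (our illustration; the resampling scheme and the stroke-width mapping are assumptions, not the paper's implementation):

```python
# Treat a drawn stroke's y-values (as a function of x) as a user-defined
# curve, then use that curve to drive a property of target artwork.
import bisect

def stroke_to_function(points):
    """points: list of (x, y) with strictly increasing x. Returns f: [0,1] -> y."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, x_max = xs[0], xs[-1]
    def f(t):                                  # t in [0, 1]
        x = x_min + t * (x_max - x_min)
        i = min(bisect.bisect_left(xs, x), len(xs) - 1)
        if i == 0:
            return ys[0]
        # linear interpolation between surrounding samples
        frac = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + frac * (ys[i] - ys[i - 1])
    return f

wave = stroke_to_function([(0, 2), (10, 8), (20, 3), (30, 9)])
# Drive the stroke width of 10 target segments from the drawn curve:
widths = [wave(i / 9) for i in range(10)]
print([round(w, 1) for w in widths])
```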
2
Memory Manipulations in Extended Reality
Elise Bonnail (Institut Polytechnique de Paris, Paris, France), Wen-Jie Tseng (Institut Polytechnique de Paris, Paris, France), Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom), Eric Lecolinet (Institut Polytechnique de Paris, Paris, France), Samuel Huron (Télécom Paris, Institut Polytechnique de Paris, Palaiseau, Île-de-France, France), Jan Gugenheimer (TU Darmstadt, Darmstadt, Germany)
Human memory has notable limitations (e.g., forgetting) which have necessitated a variety of memory aids (e.g., calendars). As we grow closer to mass adoption of everyday Extended Reality (XR), which is frequently leveraging perceptual limitations (e.g., redirected walking), it becomes pertinent to consider how XR could leverage memory limitations (forgetting, distorting, persistence) to induce memory manipulations. As memories highly impact our self-perception, social interactions, and behaviors, there is a pressing need to understand XR Memory Manipulations (XRMMs). We ran three speculative design workshops (n=12), with XR and memory researchers creating 48 XRMM scenarios. Through thematic analysis, we define XRMMs, present a framework of their core components and reveal three classes (at encoding, pre-retrieval, at retrieval). Each class differs in terms of technology (AR, VR) and impact on memory (influencing quality of memories, inducing forgetting, distorting memories). We raise ethical concerns and discuss opportunities of perceptual and memory manipulations in XR.
2
InfinitePaint: Painting in Virtual Reality with Passive Haptics Using Wet Brushes and a Physical Proxy Canvas
Andreas Rene Fender (ETH Zürich, Zurich, Switzerland), Thomas Roberts (ETH Zürich, Zurich, Switzerland), Tiffany Luong (ETH Zürich, Zürich, Switzerland), Christian Holz (ETH Zürich, Zurich, Switzerland)
Digital painting interfaces require an input fidelity that preserves the artistic expression of the user. Drawing tablets allow for precise and low-latency sensing of pen motions and other parameters like pressure to convert them to fully digitized strokes. A drawback is that those interfaces are rigid. While soft brushes can be simulated in software, the haptic sensation of the rigid pen input device is different compared to using a soft wet brush on paper. We present InfinitePaint, a system that supports digital painting in Virtual Reality on real paper with a real wet brush. We use special paper that turns black wherever it comes into contact with water and turns blank again upon drying. A single camera captures those temporary strokes and digitizes them while applying properties like color or other digital effects. We tested our system with artists and compared the subjective experience with a drawing tablet.
2
“I normally wouldn't talk with strangers”: Introducing a Socio-Spatial Interface for Fostering Togetherness Between Strangers
Ge Guo (Cornell University, Ithaca, New York, United States), Gilly Leshed (Cornell University, Ithaca, New York, United States), Keith Evan Green (Cornell University, Ithaca, New York, United States)
Interacting with strangers can be beneficial but also challenging. Fortunately, these challenges can lead to design opportunities. In this paper, we present the design and evaluation of a socio-spatial interface, SocialStools, that leverages the human propensity for embodied interaction to foster togetherness between strangers. SocialStools is an installation of three responsive stools on caster wheels that generate sound and imagery in the near environment as three strangers sit on them, move them, and rotate them relative to each other. In our study with 12 groups of three strangers, we found a sense of togetherness emerged through interaction, evidenced by different patterns of socio-spatial movements, verbal communication, non-verbal behavior, and interview responses. We present our findings, articulate reasons for the cultivation of togetherness, consider the unique social affordances of our spatial interface in shifting attention during interpersonal communication, and provide design implications. This research contributes insights toward designing cyber-physical interfaces that foster interaction and togetherness among strangers at a time when cultivating togetherness is especially critical.
2
AutomataStage: an AR-mediated Creativity Support Tool for Hands-on Multidisciplinary Learning
Yunwoo Jeong (KAIST, Daejeon, Korea, Republic of), Hyungjun Cho (KAIST, Daejeon, Korea, Republic of), Taewan Kim (KAIST, Daejeon, Korea, Republic of), Tek-Jin Nam (KAIST, Daejeon, Korea, Republic of)
Creativity support tools can enhance the hands-on multidisciplinary learning experience by drawing interest to the process of creating the outcome. We present AutomataStage, an AR-mediated creativity support tool for hands-on multidisciplinary learning. AutomataStage uses a video see-through interface to support the creation of Interactive Automata. The combination of building blocks and low-cost materials increases expressiveness, while the generative design method and one-to-one guide support the idea development process. It also provides a hardware see-through feature, through which internal parts and circuits can be seen, and an operational see-through feature that shows the operation in real time. The visual programming method with a state transition diagram supports the iterative process during creation. A user study shows that AutomataStage enabled students to create diverse Interactive Automata within 40-minute sessions. By creating Interactive Automata, the participants could learn the basic concepts of the components. The see-through features allowed active exploration with interest while integrating the components. We discuss the implications of hands-on tools with interactive and kinetic content beyond multidisciplinary learning.
2
TactorBots: A Haptic Design Toolkit for Out-of-lab Exploration of Emotional Robotic Touch
Ran Zhou (University of Colorado, Boulder, Boulder, Colorado, United States), Zachary Schwemler (University of Colorado, Boulder, Boulder, Colorado, United States), Akshay Baweja (Parsons School of Design, New York City, New York, United States), Harpreet Sareen (Parsons School of Design, New York City, New York, United States), Casey Lee Hunt (University of Colorado, Boulder, Boulder, Colorado, United States), Daniel Leithinger (University of Colorado, Boulder, Boulder, Colorado, United States)
Emerging research has demonstrated the viability of emotional communication through haptic technology inspired by interpersonal touch. However, the meaning-making of artificial touch remains ambiguous and contextual. We see this ambiguity caused by robotic touch’s "otherness" as an opportunity for exploring alternatives. To empower emotional haptic design in longitudinal out-of-lab exploration, we devise TactorBots, a design toolkit consisting of eight wearable hardware modules for rendering robotic touch gestures controlled by a web-based software application. We deployed TactorBots to thirteen designers and researchers to validate its functionality, characterize its design experience, and analyze what, how, and why alternative perceptions, practices, contexts, and metaphors would emerge in the experiment. We provide suggestions for designing future toolkits and field studies based on our experiences. Reflecting on the findings, we derive design implications for further enhancing the ambiguity and shifting the mindsets to expand the design space.
2
Characteristics of Deep and Skim Reading on Smartphones vs. Desktop: A Comparative Study
Xiuge Chen (The University of Melbourne, Melbourne, Victoria, Australia), Namrata Srivastava (Monash University, Melbourne, Victoria, Australia), Rajiv Jain (Adobe Research, College Park, Maryland, United States), Jennifer Healey (Adobe Research, San Jose, California, United States), Tilman Dingler (University of Melbourne, Melbourne, Victoria, Australia)
Deep reading fosters text comprehension, memory, and critical thinking. The growing prevalence of digital reading on mobile interfaces raises concerns that deep reading is being replaced by skimming and sifting through information, but this is currently unmeasured. Traditionally, reading quality is assessed using comprehension tests, which require readers to explicitly answer a set of carefully composed questions. To quantify and understand reading behaviour in natural settings and at scale, however, implicit measures of deep versus skim reading are needed across desktop and mobile devices, the most prominent digital reading platforms. In this paper, we present an approach to systematically induce deep and skim reading and subsequently train classifiers to discriminate these two reading styles based on eye movement patterns and interaction data. Based on a user study with 29 participants, we created models that detect deep reading on both devices with up to 0.82 AUC. We present the characteristics of deep reading and discuss how our models can be used to measure the effect of reading UI design and monitor long-term changes in reading behaviours.
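The final classification step might look like the following sketch (the features and data here are made up for illustration; only the train-and-evaluate pattern mirrors the paper):

```python
# Train a classifier to separate deep from skim reading from eye-movement
# features and evaluate it with AUC. All feature values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
# Illustrative features: mean fixation duration (ms), regression rate,
# scroll speed (px/s). Skim reading: shorter fixations, faster scrolling.
deep = np.column_stack([rng.normal(230, 30, n), rng.normal(0.15, 0.05, n),
                        rng.normal(40, 15, n)])
skim = np.column_stack([rng.normal(180, 30, n), rng.normal(0.08, 0.05, n),
                        rng.normal(120, 30, n)])
X = np.vstack([deep, skim])
y = np.array([1] * n + [0] * n)            # 1 = deep, 0 = skim

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```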
2
Crownboard: A One-Finger Crown-Based Smartwatch Keyboard for Users with Limited Dexterity
Gulnar Rakhmetulla (University of California, Merced, Merced, California, United States), Ahmed Sabbir Arif (University of California, Merced, Merced, California, United States)
Mobile text entry is difficult for people with motor impairments due to limited access to smartphones and the need for precise target selection on touchscreens. Text entry on smartwatches, on the other hand, has not been well explored for this population. Crownboard enables people with limited dexterity to enter text on a smartwatch using its crown. It uses an alphabetical layout divided into eight zones around the bezel. The zones are scanned either automatically or manually by rotating the crown, then selected by pressing the crown. Crownboard decodes zone sequences into words and displays word suggestions. We validated its design in multiple studies. First, a comparison between manual and automated scanning revealed that manual scanning is faster and more accurate. Second, a comparison between clockwise and shortest-path scanning identified the former to be faster and more accurate. In the final study with representative users, only 30% of participants could use the default Qwerty keyboard. They were 9% and 23% faster with manual and automated Crownboard, respectively. All participants were able to use both variants of Crownboard.
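Decoding zone sequences into words works much like T9 on eight keys. A minimal sketch (the zone boundaries and lexicon below are our assumptions, not Crownboard's layout):

```python
# Letters are binned alphabetically into eight bezel zones; a sequence of
# zone selections is matched against a dictionary indexed by zone signature.
ZONES = ["abc", "def", "ghi", "jkl", "mno", "pqr", "stu", "vwxyz"]
LETTER_TO_ZONE = {ch: z for z, letters in enumerate(ZONES) for ch in letters}

def word_to_zones(word):
    return tuple(LETTER_TO_ZONE[ch] for ch in word.lower())

LEXICON = ["hello", "help", "gem", "ink", "hold"]
INDEX = {}
for w in LEXICON:
    INDEX.setdefault(word_to_zones(w), []).append(w)

def decode(zone_sequence):
    """Return candidate words whose zone signature matches the input."""
    return INDEX.get(tuple(zone_sequence), [])

print(decode(word_to_zones("help")))   # -> ['help']
print(decode(word_to_zones("hello")))  # -> ['hello']
```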
2
Facilitating Experiential Training for Counselors using a Real-time Annotation Tool
Tianying Chen (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Michael Xieyang Liu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Emily Ding (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Emma O'Neil (University of Pennsylvania, Philadelphia, Pennsylvania, United States), Mansi Agarwal (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Robert E. Kraut (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Laura Dabbish (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Experiential training, where mental health professionals practice their learned skills, remains the most costly component of therapeutic training. We introduce Pin-MI, a video-call-based tool that supports experiential learning of counseling skills used in motivational interviewing (MI) through interactive role-play as client and counselor. In Pin-MI, counselors annotate, or "pin" the important moments in their role-play sessions in real-time. The pins are then used post-session to facilitate a reflective learning process, in which both client and counselor can provide feedback about what went well or poorly during each pinned moment. We discuss the design of Pin-MI and a qualitative evaluation with a set of healthcare professionals learning MI. Our evaluation suggests that Pin-MI helped users develop empathy, be more aware of their skill usage, guaranteed immediate and targeted feedback, and helped users correct misconceptions about their performance. We discuss implications for the design of experiential training tools for learning counseling skills.
2
Factors of Haptic Experience across multiple Haptic modalities
Ahmed Anwar (University of Waterloo, Waterloo, Ontario, Canada), Tianzheng Shi (University of Waterloo, Waterloo, Ontario, Canada), Oliver Schneider (University of Waterloo, Waterloo, Ontario, Canada)
Haptic Experience (HX) is a proposed set of quality criteria for haptic systems, with prior evidence for a 5-factor model based on vibrotactile feedback. We report on an ongoing process of scale development to measure HX and explore whether these criteria hold when applied to more diverse devices, including vibrotactile, force feedback, surface haptics, and mid-air haptics. From an in-person user study with 430 participants, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA), we extract an 11-item, 4-factor model (Realism, Harmony, Involvement, Expressivity) that only partially overlaps with the previous model. Comparing this model to the previous vibrotactile model, we find that the new 4-factor model is more general and can guide the attributes or applications of new haptic systems. Our findings suggest that HX may vary depending on the modalities used in an application, but these four factors are general constructs that might overlap with modality-specific concepts of HX. These factors can inform designers about the right quality criteria to use when designing or evaluating haptic experiences for multiple modalities.
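As a rough illustration of the EFA step, the sketch below fits a 4-factor model to synthetic questionnaire responses; the data, rotation choice, and interpretation are assumptions for illustration, not the study's analysis.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
responses = rng.normal(size=(430, 11))  # 430 participants x 11 scale items (synthetic)

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=1)
fa.fit(responses)
loadings = fa.components_.T  # item-by-factor loading matrix (11 x 4)
print(loadings.round(2))
# On real data, items loading together would be read as factors such as
# Realism, Harmony, Involvement, and Expressivity.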
2
SwellSense: Creating 2.5D interactions with micro-capsule paper
Tingyu Cheng (Interactive Computing, Atlanta, Georgia, United States)Zhihan Zhang (University of Washington, Seattle, Washington, United States)Bingrui Zong (Georgia Institute of Technology, Atlanta, Georgia, United States)Yuhui Zhao (Georgia Institute of Technology, Atlanta, Georgia, United States)Zekun Chang (Cornell University, Ithaca, New York, United States)Ye Jun Kim (Georgia Institute of Technology, Atlanta, Georgia, United States)Clement Zheng (National University of Singapore, Singapore, Singapore, Singapore)Gregory D. Abowd (Northeastern University, Boston, Massachusetts, United States)HyunJoo Oh (Georgia Institute of Technology, Atlanta, Georgia, United States)
In this paper, we propose SwellSense, a fabrication technique for screen printing stretchable circuits onto a special micro-capsule paper, creating localized swelling patterns with sensing capabilities. This simple technique allows users to create a wide range of paper-based tactile interactive devices, which mostly maintain a 2D planar form factor but can also be curved or folded into 3D interactive artifacts. We first present design guidelines to support various tactile interaction designs, including basic tactile graphic geometries, patterns with directional density, and finer interactive textures with embedded sensing such as touch sensors, pressure sensors, and mechanical switches. We then provide a design editor that enables users to design more creatively using the SwellSense technique. We report a technical evaluation and a user evaluation to validate the basic performance of SwellSense. Lastly, we demonstrate several application examples and conclude with a discussion of current limitations and future work.
2
Predicting Gaze-based Target Selection in Augmented Reality Headsets based on Eye and Head Endpoint Distributions
Yushi Wei (Xi'an Jiaotong-Liverpool University, Suzhou, China)Rongkai Shi (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)Difeng Yu (University of Melbourne, Melbourne, Victoria, Australia)Yihong Wang (Xi'an Jiaotong-Liverpool University, Suzhou, China)Yue Li (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)Lingyun Yu (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)Hai-Ning Liang (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)
Target selection is a fundamental task in interactive Augmented Reality (AR) systems. Predicting the intended target of selection in such systems can provide users with a smooth, low-friction interaction experience. Our work aims to predict gaze-based target selection in AR headsets with eye and head endpoint distributions, which describe the probability distribution of eye and head 3D orientation when a user triggers a selection input. We first conducted a user study to collect users’ eye and head behavior in a gaze-based pointing selection task with two confirmation mechanisms (air tap and blinking). Based on the study results, we then built two models: a unimodal model using only eye endpoints and a multimodal model using both eye and head endpoints. Results from a second user study showed that pointing accuracy improved by approximately 32% after integrating our models into gaze-based selection techniques.
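The core idea, modeling each target's selection endpoints as a probability distribution and choosing the most likely target, can be sketched as follows. The single 2D Gaussian per target and the angular units are simplifying assumptions; the paper's multimodal model also incorporates head endpoints.

import numpy as np
from scipy.stats import multivariate_normal

def fit_targets(samples_per_target):
    # samples_per_target: {target_id: (n, 2) array of angular eye endpoints}
    return {t: (s.mean(axis=0), np.cov(s.T)) for t, s in samples_per_target.items()}

def predict(endpoint, models):
    # Return the target whose endpoint distribution best explains the sample.
    return max(models, key=lambda t: multivariate_normal.pdf(endpoint, *models[t]))

rng = np.random.default_rng(2)
models = fit_targets({
    "A": rng.normal([0, 0], 0.5, size=(50, 2)),  # endpoints clustered near target A
    "B": rng.normal([5, 0], 0.5, size=(50, 2)),  # endpoints clustered near target B
})
print(predict([4.6, 0.2], models))  # -> 'B'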
2
“It can bring you in the right direction”: Episode-Driven Data Narratives to Help Patients Navigate Multidimensional Diabetes Data to Make Care Decisions
Shriti Raj (University of Michigan, Ann Arbor, Michigan, United States)Toshi Gupta (University of Michigan, Ann Arbor, Michigan, United States)Joyce Lee (University of Michigan, Ann Arbor, Michigan, United States)Matthew Kay (Northwestern University, Chicago, Illinois, United States)Mark W. Newman (University of Michigan, Ann Arbor, Michigan, United States)
Engaging with multiple streams of personal health data to inform self-care of chronic health conditions remains a challenge. Existing informatics tools provide limited support for patients to make data actionable. To design better tools, we conducted two studies with Type 1 diabetes patients and their clinicians. In the first study, we observed data review sessions between patients and clinicians to articulate the tasks involved in assessing different types of data from diabetes devices to make care decisions. Drawing upon these tasks, we designed novel data interfaces called episode-driven data narratives and performed a task-driven evaluation. We found that, compared to commercially available diabetes data reports, episode-driven data narratives improved engagement and decision-making with data. We discuss implications for designing data interfaces that support interaction with multidimensional health data to inform self-care.
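Although the paper derives its episode definitions from observed clinical tasks, the general idea of segmenting a data stream into care-relevant episodes can be sketched as follows; the 70 mg/dL hypoglycemia threshold is a common clinical convention, and the run-based episode rule is an illustrative assumption.

def find_low_episodes(readings, threshold=70):
    # readings: evenly sampled glucose values in mg/dL.
    # Returns (start, end) index pairs of contiguous runs below the threshold.
    episodes, start = [], None
    for i, g in enumerate(readings):
        if g < threshold and start is None:
            start = i
        elif g >= threshold and start is not None:
            episodes.append((start, i - 1))
            start = None
    if start is not None:
        episodes.append((start, len(readings) - 1))
    return episodes

trace = [110, 95, 68, 62, 75, 120, 66, 64, 63, 90]
print(find_low_episodes(trace))  # [(2, 3), (6, 8)] -- two low-glucose episodes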
2
Exploring Memory-Oriented Interactions with Digital Photos In and Across Time: A Field Study of Chronoscope
Amy Yo Sue Chen (Simon Fraser University, Surrey, British Columbia, Canada)William Odom (Simon Fraser University, Surrey, British Columbia, Canada)Carman Neustaedter (Simon Fraser University, Surrey, British Columbia, Canada)Ce Zhong (Simon Fraser University, Surrey, British Columbia, Canada)Henry Lin (Simon Fraser University, Surrey, British Columbia, Canada)
We describe a field study of Chronoscope, a tangible photo viewer that lets people revisit and explore their digital photos with the support of temporal metadata. Chronoscope offers different temporal modalities for organizing one’s personal digital photo archive, and for exploring possible connections in and across time, and among photos and memories. We deployed four Chronoscopes in four households for three months to understand participants’ experiences over time. Our goals are to investigate the reflective potential of temporal modalities as an alternative design approach for supporting memory-oriented photo exploration, and to empirically explore conceptual propositions related to slow technology. Findings revealed that Chronoscope catalyzed a range of reflective experiences on participants' respective life histories and life stories. It opened up alternative ways of considering time and the potential longevity of personal photo archives. We conclude with implications that present opportunities for future HCI research and practice.
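One temporal modality such a viewer can offer is resurfacing photos taken on the same calendar day across years. A minimal sketch using timestamp metadata; this grouping rule is an assumption for illustration, not Chronoscope's documented set of modes.

from collections import defaultdict
from datetime import datetime

def on_this_day(photos, month, day):
    # photos: list of (path, datetime) pairs; returns {year: [paths]}.
    by_year = defaultdict(list)
    for path, ts in photos:
        if (ts.month, ts.day) == (month, day):
            by_year[ts.year].append(path)
    return dict(by_year)

photos = [("a.jpg", datetime(2019, 5, 4)), ("b.jpg", datetime(2021, 5, 4)),
          ("c.jpg", datetime(2021, 7, 1))]
print(on_this_day(photos, 5, 4))  # {2019: ['a.jpg'], 2021: ['b.jpg']}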
2
When do Data Visualizations Persuade? The Impact of Prior Attitudes on Learning about Correlations from Scatterplot Visualizations
Douglas Markant (University of North Carolina at Charlotte, Charlotte, North Carolina, United States)Milad Rogha (University of North Carolina at Charlotte, Charlotte, North Carolina, United States)Alireza Karduni (IDEO, Chicago, Illinois, United States)Ryan Wesslen (UNC Charlotte, Charlotte, North Carolina, United States)Wenwen Dou (UNC Charlotte, Charlotte, North Carolina, United States)
Data visualizations are vital to scientific communication on critical issues such as public health, climate change, and socioeconomic policy. They are often designed not just to inform, but to persuade people to make consequential decisions (e.g., to get vaccinated). Are such visualizations persuasive, especially when audiences have beliefs and attitudes that the data contradict? In this paper we examine the impact of existing attitudes (e.g., positive or negative attitudes toward COVID-19 vaccination) on changes in beliefs about statistical correlations when viewing scatterplot visualizations with different representations of statistical uncertainty. We find that strong prior attitudes are associated with smaller belief changes when presented with data that contradicts existing views, and that visual uncertainty representations may amplify this effect. Finally, even when participants' beliefs about correlations shifted, their attitudes remained unchanged, highlighting the need for further research on whether data visualizations can drive longer-term changes in views and behavior.
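Studies in this space typically construct scatterplot stimuli with a controlled underlying correlation. A minimal sketch of such stimulus generation follows; the sample size and correlation value are illustrative, not the study's actual materials.

import numpy as np
import matplotlib.pyplot as plt

def correlated_sample(r, n=100, seed=0):
    # Draw n points whose population Pearson correlation is r.
    rng = np.random.default_rng(seed)
    cov = [[1.0, r], [r, 1.0]]
    return rng.multivariate_normal([0, 0], cov, size=n)

pts = correlated_sample(r=0.6)
print(np.corrcoef(pts[:, 0], pts[:, 1])[0, 1])  # empirical r, close to 0.6
plt.scatter(pts[:, 0], pts[:, 1], alpha=0.6)
plt.title("Stimulus with underlying r = 0.6")
plt.show()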
2
The Effects of Avatar and Environment on Thermal Perception and Skin Temperature in Virtual Reality
Martin Kocur (University of Regensburg, Regensburg, Germany)Lukas Jackermeier (University of Regensburg, Regensburg, Germany)Valentin Schwind (Frankfurt University of Applied Sciences, Frankfurt, Germany)Niels Henze (University of Regensburg, Regensburg, Germany)
Humans' thermal regulation and subjective perception of temperature are highly plastic and depend on the visual appearance of the surrounding environment. Previous work shows that an environment’s color temperature affects the experienced temperature. As virtual reality (VR) enables visual immersion, recent work suggests that a VR scene's color temperature also affects experienced temperature. It is, however, unclear whether an avatar’s appearance also affects users’ thermal perception and whether a change in thermal perception even influences body temperature. Therefore, we conducted a study with 32 participants performing a task in an ice or fire world while having ice or fire hands. We show that being in a fire world or having fire hands increases the perceived temperature. We even show that having fire hands decreases the hand temperature compared to having ice hands. We discuss the implications for the design of VR systems and future research directions.
2
What is Human-Centered about Human-Centered AI? A Map of the Research Landscape
Tara Capel (Queensland University of Technology, Brisbane, QLD, Australia)Margot Brereton (QUT, Brisbane, Brisbane, Australia)
The application of Artificial Intelligence (AI) across a wide range of domains comes with both high expectations of its benefits and dire predictions of misuse. While AI systems have largely been driven by a technology-centered design approach, the potential societal consequences of AI have mobilized both HCI and AI researchers towards researching human-centered artificial intelligence (HCAI). However, there remains considerable ambiguity about what it means to frame, design, and evaluate HCAI. This paper presents a critical review of the large corpus of peer-reviewed literature emerging on HCAI in order to characterize what the community is defining as HCAI. Our review contributes an overview and map of HCAI research based on work that explicitly mentions the terms ‘human-centered artificial intelligence’ or ‘human-centered machine learning’ or their variations, and suggests future challenges and research directions. The map reveals the breadth of research happening in HCAI, established clusters, and the emerging areas of Interaction with AI and Ethical AI. The paper contributes a new definition of HCAI and calls for greater collaboration between AI and HCI research, as well as new HCAI constructs.
2
Love on the Spectrum: Toward Inclusive Online Dating Experience of Autistic Individuals
Dasom Choi (KAIST, Daejeon, Korea, Republic of)Sung-In Kim (Seoul Dasiseogi Homeless Support Center, Seoul, Korea, Republic of)Sunok Lee (KAIST, Daejeon, Korea, Republic of)Hyunseung Lim (KAIST, Daejeon, Korea, Republic of)Hee Jeong Yoo (Seoul National University Bundang Hospital, Seongnam, Korea, Republic of)Hwajung Hong (KAIST, Daejeon, Korea, Republic of)
Online dating is a space where autistic individuals can find romantic partners with reduced social demands. Autistic individuals are often expected to adapt their behaviors to the social norms underlying the online dating platform to appear as desirable romantic partners. However, given that their autistic traits can lead them to different expectations of dating, it is uncertain whether conforming their behaviors to the norm will guide them to the person they truly want. In this paper, we explored the perceptions and expectations of autistic adults in online dating through interviews and workshops. We found that autistic people desired to know whether they behaved according to the platform's norms. Still, they expected to keep their unique characteristics rather than unconditionally conform to the norm. We conclude by providing suggestions for designing inclusive online dating experiences that could foster self-guided decisions of autistic users and embrace their unique characteristics.
2
FingerMapper: Mapping Finger Motions onto Virtual Arms to Enable Safe Virtual Reality Interaction in Confined Spaces
Wen-Jie Tseng (Institut Polytechnique de Paris, Paris, France)Samuel Huron (Télécom Paris, Institut Polytechnique de Paris, Palaiseau, Île-de-France, France)Eric Lecolinet (Institut Polytechnique de Paris, Paris, France)Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France)
Whole-body movements enhance the presence and enjoyment of Virtual Reality (VR) experiences. However, using large gestures is often uncomfortable and impossible in confined spaces (e.g., public transport). We introduce FingerMapper, mapping small-scale finger motions onto virtual arms and hands to enable whole-body virtual movements in VR. In a first target selection study (n=13) comparing FingerMapper to hand tracking and ray-casting, we found that FingerMapper can significantly reduce physical motions and fatigue while having a similar degree of precision. In a consecutive study (n=13), we compared FingerMapper to hand tracking inside a confined space (the front passenger seat of a car). The results showed participants had significantly higher perceived safety and fewer collisions with FingerMapper while preserving a similar degree of presence and enjoyment as hand tracking. Finally, we present three example applications demonstrating how FingerMapper could be applied for locomotion and interaction for VR in confined spaces.
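The underlying mapping, amplifying small finger flexion into large virtual-arm motion, can be sketched with a simple linear gain and two-joint forward kinematics; the gain values and linearity are assumptions for illustration, not the paper's calibrated transfer function.

import math

def finger_to_arm(flexion, shoulder_range=90.0, elbow_range=120.0):
    # Map normalized finger flexion (0..1) to virtual arm joint angles (deg).
    flexion = max(0.0, min(1.0, flexion))
    return flexion * shoulder_range, flexion * elbow_range

def virtual_hand_position(flexion, upper=0.3, forearm=0.25):
    # Forward kinematics of a planar 2-joint arm (segment lengths in meters).
    s, e = (math.radians(a) for a in finger_to_arm(flexion))
    x = upper * math.cos(s) + forearm * math.cos(s + e)
    y = upper * math.sin(s) + forearm * math.sin(s + e)
    return x, y

print(virtual_hand_position(0.1))  # slight flexion already moves the virtual hand
print(virtual_hand_position(0.9))  # near-full flexion reaches far targets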
2
Supporting Piggybacked Co-Located Leisure Activities via Augmented Reality
Samantha Reig (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Erica Principe Cruz (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Melissa Powers (New York University, New York, New York, United States)Jennifer He (Stanford University, Stanford, California, United States)Timothy Chong (University of Washington, Seattle, Washington, United States)Yu Jiang Tham (Snap Inc., Seattle, Washington, United States)Sven Kratz (Snap, Inc., Seattle, Washington, United States)Ava Robinson (Northwestern University, Evanston, Illinois, United States)Brian A. Smith (Columbia University, New York, New York, United States)Rajan Vaish (Snap Inc., Santa Monica, California, United States)Andrés Monroy-Hernández (Princeton University, Princeton, New Jersey, United States)
Technology, especially the smartphone, is villainized for taking meaning and time away from in-person interactions and secluding people into "digital bubbles". We believe this is not an intrinsic property of digital gadgets, but evidence of a lack of imagination in technology design. Leveraging augmented reality (AR) toward this end allows us to create experiences for multiple people, their pets, and their environments. In this work, we explore the design of AR technology that "piggybacks" on everyday leisure to foster co-located interactions among close ties (with other people and pets). We designed, developed, and deployed three such AR applications, and evaluated them through a 41-participant and 19-pet user study. We gained key insights about the ability of AR to spur and enrich interaction in new channels, the importance of customization, and the challenges of designing for the physical aspects of AR devices (e.g., holding smartphones). These insights guide design implications for the novel research space of co-located AR.
2
How Instructional Data Physicalization Fosters Reflection in Personal Informatics
Marit Bentvelzen (Utrecht University, Utrecht, Netherlands)Julia Dominiak (Lodz University of Technology, Łódź, Poland)Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland)Frederique Henraat (Utrecht University, Utrecht, Netherlands)Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden)
The ever-increasing number of devices quantifying our lives offers a perspective of high awareness of one's wellbeing, yet it remains a challenge for personal informatics (PI) to effectively support data-based reflection. Effective reflection is recognised as a key factor for PI technologies to foster wellbeing. Here, we investigate whether building tangible representations of health data can offer engaging and reflective experiences. We conducted a between-subjects study where n=60 participants explored their immediate blood pressure data in relation to medical norms. They either used a standard mobile app, built a data representation from LEGO bricks based on instructions, or completed a free-form brick build. We found that building with instructions fostered more comparison and that using bricks fostered focused attention. The free-form condition required extra time to complete and suffered from poor usability. Our work shows that designing instructional physicalisation experiences for PI is a means of improving engagement and understanding of personal data.