List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

15
FakeForward: Using Deepfake Technology for Feedforward Learning
Christopher Clarke (University of Bath, Bath, United Kingdom), Jingnan Xu (University of Bath, Bath, United Kingdom), Ye Zhu (University of Bath, Bath, United Kingdom), Karan Dharamshi (University of Bath, Bath, United Kingdom), Harry McGill (University of Bath, Bath, United Kingdom), Stephen Black (University of Bath, Bath, United Kingdom), Christof Lutteroth (University of Bath, Bath, United Kingdom)
Videos are commonly used to support learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data and labour intensive: a lot of video footage needs to be collected and manually edited in order to create an effective self-modelling video. We address this by presenting FakeForward -- a method which uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user’s. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.
13
PunchPrint: Creating Composite Fiber-Filament Craft Artifacts by Integrating Punch Needle Embroidery and 3D Printing
Ashley Del Valle (University of California Santa Barbara, Santa Barbara, California, United States), Mert Toka (University of California Santa Barbara, Santa Barbara, California, United States), Alejandro Aponte (University of California Santa Barbara, Santa Barbara, California, United States), Jennifer Jacobs (University of California Santa Barbara, Santa Barbara, California, United States)
New printing strategies have enabled 3D-printed materials that imitate traditional textiles. These filament-based textiles are easy to fabricate but lack the look and feel of fiber textiles. We seek to augment 3D-printed textiles with needlecraft to produce composite materials that integrate the programmability of additive fabrication with the richness of traditional textile craft. We present PunchPrint: a technique for integrating fiber and filament in a textile by combining punch needle embroidery and 3D printing. Using a toolpath that imitates textile weave structure, we print a flexible fabric that provides a substrate for punch needle production. We evaluate our material’s robustness through tensile strength and needle compatibility tests. We integrate our technique into a parametric design tool and produce functional artifacts that show how PunchPrint broadens punch needle craft by reducing labor in small, detailed artifacts, enabling the integration of openings and multiple yarn weights, and scaffolding soft 3D structures.
8
Imprimer: Computational Notebooks for CNC Milling
Jasper Tran O'Leary (University of Washington, Seattle, Washington, United States), Gabrielle Benabdallah (University of Washington, Seattle, Washington, United States), Nadya Peek (University of Washington, Seattle, Washington, United States)
Digital fabrication in industrial contexts involves standardized procedures that prioritize precision and repeatability. However, fabrication machines are now available for practitioners who focus instead on experimentation. In this paper, we reframe hobbyist CNC milling as writing literate programs which interleave documentation, interactive graphics, and source code for machine control. To test this approach, we present Imprimer, a machine infrastructure for a CNC mill and an associated library for a computational notebook. Imprimer lets makers learn experimentally, prototype new interactions for making, and understand physical processes by writing and debugging code. We demonstrate three experimental milling workflows as computational notebooks, conduct a user study with practitioners with a range of backgrounds, and discuss literate programming as a future vision for digital fabrication altogether.
6
3D Printable Play-Dough: New Biodegradable Materials and Creative Possibilities for Digital Fabrication
Leah Buechley (University of New Mexico, Albuquerque, New Mexico, United States), Ruby Ta (University of New Mexico, Albuquerque, New Mexico, United States)
Play-dough is a brightly-colored, easy-to-make, and familiar material. We have developed and tested custom play-dough materials that can be employed in 3D printers designed for clay. This paper introduces a set of recipes for 3D printable play-dough along with an exploration of these materials' print characteristics. We explore the design potential of play-dough as a sustainable fabrication material, highlighting its recyclability, compostability, and repairability. We demonstrate how custom-color prints can be designed and constructed and describe how play-dough can be used as a support material for clay 3D prints. We also present a set of example artifacts made from play-dough and discuss opportunities for future research.
5
Augmenting Human Cognition with an AI-Mediated Intelligent Visual Feedback
Songlin Xu (University of California, San Diego, San Diego, California, United States), Xinyu Zhang (University of California San Diego, San Diego, California, United States)
In this paper, we introduce an AI-mediated framework that can provide intelligent feedback to augment human cognition. Specifically, we leverage deep reinforcement learning (DRL) to provide adaptive time pressure feedback to improve user performance in a math arithmetic task. Time pressure feedback can either improve or degrade user performance by regulating user attention and anxiety. Adaptive time pressure feedback controlled by a DRL policy according to users' real-time performance could potentially resolve this trade-off. However, DRL training and hyperparameter tuning may require large amounts of data and iterative user studies. Therefore, we propose a dual-DRL framework that trains a regulation DRL agent to regulate user performance by interacting with another simulation DRL agent that mimics user cognition behaviors from an existing dataset. Our user study demonstrates the feasibility and effectiveness of the dual-DRL framework in augmenting user performance compared to a baseline group.
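The dual-DRL interaction loop can be sketched at a high level. Everything here is an illustrative assumption rather than the authors' implementation: the three discrete pressure levels, the inverted-U user simulator, and the single-state, bandit-style value update standing in for a full DRL policy.

```python
import random

PRESSURES = [0, 1, 2]  # low / medium / high time pressure (assumed discretization)

def simulated_user(pressure):
    """Stand-in for the simulation agent: maps time pressure to task
    performance, assuming an inverted-U response (best at medium pressure)."""
    base = {0: 0.5, 1: 0.8, 2: 0.4}[pressure]
    return base + random.uniform(-0.05, 0.05)

def train_regulator(episodes=2000, eps=0.1, lr=0.1):
    """Train the regulation agent against the simulator instead of live
    users; a single-state Q-table stands in for a full DRL policy."""
    q = {a: 0.0 for a in PRESSURES}
    for _ in range(episodes):
        if random.random() < eps:
            a = random.choice(PRESSURES)        # explore a random pressure level
        else:
            a = max(q, key=q.get)               # exploit the best level so far
        reward = simulated_user(a)              # interact with the simulator
        q[a] += lr * (reward - q[a])            # incremental value update
    return q

random.seed(0)  # reproducible sketch
q = train_regulator()
print("learned values:", q)
```

The structural point the sketch illustrates is that the regulation agent's reward signal comes from the simulation agent rather than from live participants, so training iterations are cheap.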
5
Understanding Moderators' Conflict and Conflict Management Strategies with Streamers in Live Streaming Communities
Jie Cai (Penn State University, University Park, Pennsylvania, United States), Donghee Yvette Wohn (New Jersey Institute of Technology, Newark, New Jersey, United States)
As each micro community centered around the streamer attempts to set its own guidelines in live streaming communities, it is common for volunteer moderators (mods) and the streamer to disagree on how to handle various situations. In this study, we conducted an online survey (N=240) with live streaming mods to explore their commitment to the streamer to grow the micro community and the different styles in which they handle conflicts with the streamer. We found that 1) mods apply more active and cooperative styles than passive and assertive styles to manage conflicts, but they might be forced to do so, and 2) mods with strong commitments to the streamer would like to apply styles showing either high concerns for the streamer or low concerns for themselves. We reflect on how these results can affect micro community development and recommend designs to mitigate conflict and strengthen commitment.
4
CiteSee: Augmenting Citations in Scientific Papers with Persistent and Personalized Historical Context
Joseph Chee Chang (Allen Institute for AI, Seattle, Washington, United States), Amy X. Zhang (University of Washington, Seattle, Washington, United States), Jonathan Bragg (Allen Institute for Artificial Intelligence, Seattle, Washington, United States), Andrew Head (University of Pennsylvania, Philadelphia, Pennsylvania, United States), Kyle Lo (Allen Institute for Artificial Intelligence, Seattle, Washington, United States), Doug Downey (Allen Institute for Artificial Intelligence, Seattle, Washington, United States), Daniel S. Weld (Allen Institute for Artificial Intelligence, Seattle, Washington, United States)
When reading a scholarly article, inline citations help researchers contextualize the current article and discover relevant prior work. However, it can be challenging to prioritize and make sense of the hundreds of citations encountered during literature reviews. This paper introduces CiteSee, a paper reading tool that leverages a user's publishing, reading, and saving activities to provide personalized visual augmentations and context around citations. First, CiteSee connects the current paper to familiar contexts by surfacing known citations a user had cited or opened. Second, CiteSee helps users prioritize their exploration by highlighting relevant but unknown citations based on saving and reading history. We conducted a lab study that suggests CiteSee is significantly more effective for paper discovery than three baselines. A field deployment study shows CiteSee helps participants keep track of their explorations and leads to better situational awareness and increased paper discovery via inline citation when conducting real-world literature reviews.
4
A Human-Computer Collaborative Editing Tool for Conceptual Diagrams
Lihang Pan (Tsinghua University, Beijing, China), Chun Yu (Tsinghua University, Beijing, China), Zhe He (Tsinghua University, Beijing, Beijing, China), Yuanchun Shi (Tsinghua University, Beijing, China)
Editing (e.g., editing conceptual diagrams) is a typical office task that requires numerous tedious GUI operations, resulting in poor interaction efficiency and user experience, especially on mobile devices. In this paper, we present a new type of human-computer collaborative editing tool (CET) that enables accurate and efficient editing with little interaction effort. CET divides the task into two parts, and the human and the computer focus on their respective specialties: the human describes high-level editing goals with multimodal commands, while the computer calculates, recommends, and performs detailed operations. We conducted a formative study (N = 16) to determine the concrete task division and implemented the tool on Android devices for the specific tasks of editing concept diagrams. The user study (N = 24 + 20) showed that it increased diagram editing speed by 32.75% compared with existing state-of-the-art commercial tools and led to better editing results and user experience.
4
“Should I Follow the Human, or Follow the Robot?” — Robots in Power Can Have More Influence Than Humans on Decision-Making
Yoyo Tsung-Yu Hou (Cornell University, Ithaca, New York, United States), Wen-Ying Lee (Cornell University, Ithaca, New York, United States), Malte F. Jung (Cornell University, Ithaca, New York, United States)
Artificially intelligent (AI) agents such as robots are increasingly delegated power in work settings, yet it remains unclear how power functions in interactions with both humans and robots, especially when they directly compete for influence. Here we present an experiment where every participant was matched with one human and one robot to perform decision-making tasks. By manipulating who has power, we created three conditions: human as leader, robot as leader, and a no-power-difference control. The results showed that the participants were significantly more influenced by the leader, regardless of whether the leader was a human or a robot. However, they generally held a more positive attitude toward the human than the robot, although they considered whichever was in power as more competent. This study illustrates the importance of power for future Human-Robot Interaction (HRI) and Human-AI Interaction (HAI) research, as it addresses pressing concerns of society about AI-powered intelligent agents.
4
Feel the Force, See the Force: Exploring Visual-tactile Associations of Deformable Surfaces with Colours and Shapes
Cameron Steer (University of Bath, Bath, United Kingdom), Teodora Dinca (University of Bath, Bath, United Kingdom), Crescent Jicol (University of Bath, Bath, United Kingdom), Michael J. Proulx (University of Bath, Bath, United Kingdom), Jason Alexander (University of Bath, Bath, United Kingdom)
Deformable interfaces provide unique interaction potential for force input, for example, when users physically push into a soft display surface. However, there remains limited understanding of which visual-tactile design elements signify the presence and stiffness of such deformable force-input components. In this paper, we explore how people correspond surface stiffness to colours, graphical shapes, and physical shapes. We conducted a cross-modal correspondence (CC) study, where 30 participants associated different surface stiffnesses with colours and shapes. Our findings evidence the CCs between stiffness levels for a subset of the 2D/3D shapes and colours used in the study. We distil our findings into three design recommendations: (1) lighter colours should be used to indicate soft surfaces, and darker colours should indicate stiff surfaces; (2) rounded shapes should be used to indicate soft surfaces, while less-curved shapes should be used to indicate stiffer surfaces; and (3) longer 2D drop-shadows should be used to indicate softer surfaces, while shorter drop-shadows should be used to indicate stiffer surfaces.
4
Co-Writing with Opinionated Language Models Affects Users' Views
Maurice Jakesch (Cornell University, Ithaca, New York, United States), Advait Bhat (Microsoft Research India, Bangalore, India), Daniel Buschek (University of Bayreuth, Bayreuth, Germany), Lior Zalmanson (Tel Aviv University, Tel Aviv, Tel Aviv District, Israel), Mor Naaman (Cornell Tech, New York, New York, United States)
If large language models like GPT-3 preferentially produce a particular point of view, they may influence people's opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write -- and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
3
Smartphone-derived Virtual Keyboard Dynamics Coupled with Accelerometer Data as a Window into Understanding Brain Health
Emma Ning (University of Illinois at Chicago, Chicago, Illinois, United States), Andrea T. Cladek (University of Illinois at Chicago, Chicago, Illinois, United States), Mindy K. Ross (University of Illinois at Chicago, Chicago, Illinois, United States), Sarah Kabir (University of Illinois at Chicago, Chicago, Illinois, United States), Amruta Barve (University of Illinois at Chicago, Chicago, Illinois, United States), Ellyn Kennelly (Wayne State University, Detroit, Michigan, United States), Faraz Hussain (University of Illinois at Chicago, Chicago, Illinois, United States), Jennifer Duffecy (University of Illinois at Chicago, Chicago, Illinois, United States), Scott Langenecker (University of Utah, Salt Lake City, Utah, United States), Theresa Nguyen (University of Illinois at Chicago, Chicago, Illinois, United States), Theja Tulabandhula (University of Illinois at Chicago, Chicago, Illinois, United States), John Zulueta (University of Illinois at Chicago, Chicago, Illinois, United States), Olusola A. Ajilore (University of Illinois, Chicago (UIC), Chicago, Illinois, United States), Alexander P. Demos (University of Illinois at Chicago, Chicago, Illinois, United States), Alex Leow (University of Illinois, Chicago (UIC), Chicago, Illinois, United States)
We examine the feasibility of using accelerometer data exclusively collected during typing on a custom smartphone keyboard to study whether typing dynamics are associated with daily variations in mood and cognition. As part of an ongoing digital mental health study involving mood disorders, we collected data from a well-characterized clinical sample (N = 85) and classified accelerometer data per typing session into orientation (upright vs. not) and motion (active vs. not). The mood disorder group showed lower cognitive performance despite mild symptoms (depression/mania). There were also diurnal pattern differences with respect to cognitive performance: individuals with higher cognitive performance typed faster and were less sensitive to time of day. They also exhibited more well-defined diurnal patterns in smartphone keyboard usage: they engaged with the keyboard more during the day and tapered their usage more at night compared to those with lower cognitive performance, suggesting a healthier usage of their phone.
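The per-session orientation (upright vs. not) and motion (active vs. not) labels described above can be sketched as a simple thresholding pass over raw accelerometer samples. The axis convention (gravity loading on the y axis when the phone is upright) and both threshold values are assumptions for illustration, not the study's actual pipeline.

```python
import math
from statistics import mean, pstdev

def classify_session(samples, tilt_threshold=0.5, motion_threshold=0.3):
    """Classify one typing session from accelerometer samples (x, y, z in g).

    Orientation: 'upright' if gravity loads mainly on the y axis
    (phone held vertically); otherwise 'not upright'.
    Motion: 'active' if the spread of the acceleration magnitude
    exceeds motion_threshold; otherwise 'not active'.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean_y = mean(y for _, y, _ in samples)
    orientation = "upright" if abs(mean_y) > tilt_threshold else "not upright"
    motion = "active" if pstdev(mags) > motion_threshold else "not active"
    return orientation, motion

# A phone held still and vertically during typing:
print(classify_session([(0.0, -1.0, 0.0)] * 20))
```

Splitting orientation from motion like this keeps the two labels independent, so a session can be, say, upright and active (typing while walking) or flat and still (phone on a table).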
3
The Tactile Dimension: A Method for Physicalizing Touch Behaviors
Laura J. Perovich (Northeastern University, Boston, Massachusetts, United States), Bernice Rogowitz (Visual Perspectives Research, Westchester, New York, United States), Victoria Crabb (Northeastern University, Boston, Massachusetts, United States), Jack Vogelsang (Northeastern University, Boston, Massachusetts, United States), Sara Hartleben (Northeastern University, Boston, Massachusetts, United States), Dietmar Offenhuber (Northeastern University, Boston, Massachusetts, United States)
Traces of touch provide valuable insight into how we interact with the physical world. Measuring touch behavior, however, is expensive and imprecise. Utilizing a fluorescent UV tracer powder, we developed a low-cost analog method to capture persistent, high-contrast touch records on arbitrary objects. We describe our process for selecting a tracer, methods for capturing, enhancing, and aggregating traces, and approaches to examining qualitative aspects of the user experience. Three user studies demonstrate key features of this method. First, we show that it provides clear and durable traces on objects representative of scientific visualization, physicalization, and product design. Second, we demonstrate how this method could be used to study touch perception, by measuring how task and narrative framing elicit different touch behaviors on the same object. Third, we demonstrate how this method can be used to evaluate data physicalizations by observing how participants touch two different physicalizations of COVID-19 time-series data.
3
Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools
Frederic Gmeiner (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Humphrey Yang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Kenneth Holstein (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Nikolas Martelaro (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks. These tools take on more agentic roles than traditional computer-aided design tools and are often portrayed as “co-creators.” Yet, working effectively with such systems requires different skills than working with complex CAD tools alone. To date, we know little about how engineering designers learn to work with AI-based design tools. In this study, we observed trained designers as they learned to work with two AI-based tools on a realistic design task. We find that designers face many challenges in learning to effectively co-create with current systems, including challenges in understanding and adjusting AI outputs and in communicating their design goals. Based on our findings, we highlight several design opportunities to better support designer-AI co-creation.
3
What is Human-Centered about Human-Centered AI? A Map of the Research Landscape
Tara Capel (Queensland University of Technology, Brisbane, QLD, Australia), Margot Brereton (QUT, Brisbane, Australia)
The application of Artificial Intelligence (AI) across a wide range of domains comes with both high expectations of its benefits and dire predictions of misuse. While AI systems have largely been driven by a technology-centered design approach, the potential societal consequences of AI have mobilized both HCI and AI researchers towards researching human-centered artificial intelligence (HCAI). However, there remains considerable ambiguity about what it means to frame, design and evaluate HCAI. This paper presents a critical review of the large corpus of peer-reviewed literature emerging on HCAI in order to characterize what the community is defining as HCAI. Our review contributes an overview and map of HCAI research based on work that explicitly mentions the terms ‘human-centered artificial intelligence’ or ‘human-centered machine learning’ or their variations, and suggests future challenges and research directions. The map reveals the breadth of research happening in HCAI, established clusters and the emerging areas of Interaction with AI and Ethical AI. The paper contributes a new definition of HCAI, and calls for greater collaboration between AI and HCI research, and new HCAI constructs.
3
Can Voice Assistants Be Microaggressors? Cross-Race Psychological Responses to Failures of Automatic Speech Recognition
Kimi Wenzel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Nitya Devireddy (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Cam Davison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Geoff Kaufman (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Language technologies have a racial bias, committing greater errors for Black users than for white users. However, little work has evaluated what effect these disparate error rates have on users themselves. The present study aims to understand if speech recognition errors in human-computer interactions may mirror the same effects as misunderstandings in interpersonal cross-race communication. In a controlled experiment (N=108), we randomly assigned Black and white participants to interact with a voice assistant pre-programmed to exhibit a high versus low error rate. Results revealed that Black participants in the high error rate condition, compared to Black participants in the low error rate condition, exhibited significantly higher levels of self-consciousness, lower levels of self-esteem and positive affect, and less favorable ratings of the technology. White participants did not exhibit this disparate pattern. We discuss design implications and the diverse research directions to which this initial study aims to contribute.
3
Bias-Aware Systems: Exploring Indicators for the Occurrences of Cognitive Biases when Facing Different Opinions
Nattapat Boonprakong (University of Melbourne, Parkville, Victoria, Australia), Xiuge Chen (The University of Melbourne, Melbourne, Victoria, Australia), Catherine Davey (University of Melbourne, Parkville, Victoria, Australia), Benjamin Tag (University of Melbourne, Melbourne, Victoria, Australia), Tilman Dingler (University of Melbourne, Melbourne, Victoria, Australia)
Cognitive biases have been shown to play a critical role in creating echo chambers and spreading misinformation. They undermine our ability to evaluate information and can influence our behaviour without our awareness. To allow the study of occurrences and effects of biases on information consumption behaviour, we explore indicators for cognitive biases in physiological and interaction data. To this end, we conducted two experiments investigating how people experience statements that are congruent or divergent from their own ideological stance. We collected interaction data, eye tracking data, hemodynamic responses, and electrodermal activity while participants were exposed to ideologically tainted statements. Our results indicate that people spend more time processing statements that are incongruent with their own opinion. We detected differences in blood oxygenation levels between congruent and divergent opinions, a first step towards building systems to detect and quantify cognitive biases.
3
The Walking Talking Stick: Understanding Automated Note-Taking in Walking Meetings
Luke Haliburton (LMU Munich, Munich, Germany), Natalia Bartłomiejczyk (Lodz University of Technology, Lodz, Poland), Albrecht Schmidt (LMU Munich, Munich, Germany), Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden), Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland)
While walking meetings offer a healthy alternative to sit-down meetings, they also pose practical challenges. Taking notes is difficult while walking, which limits the potential of walking meetings. To address this, we designed the Walking Talking Stick---a tangible device with integrated voice recording, transcription, and a physical highlighting button to facilitate note-taking during walking meetings. We investigated our system in a three-condition between-subjects user study with thirty pairs of participants (N=60) who conducted 15-minute outdoor walking meetings. Participants either used clip-on microphones, the prototype without the button, or the prototype with the highlighting button. We found that the tangible device increased task focus, and the physical highlighting button facilitated turn-taking and resulted in more useful notes. Our work demonstrates how interactive artifacts can incentivize users to hold meetings in motion and enhance conversation dynamics. We contribute insights for future systems which support conducting work tasks in mobile environments.
3
Short-Form Videos Degrade Our Capacity to Retain Intentions: Effect of Context Switching On Prospective Memory
Francesco Chiossi (LMU Munich, Munich, Germany), Luke Haliburton (LMU Munich, Munich, Germany), Changkun Ou (LMU Munich, Munich, Germany), Andreas Martin Butz (LMU Munich, Munich, Germany), Albrecht Schmidt (LMU Munich, Munich, Germany)
Social media platforms use short, highly engaging videos to catch users' attention. While the short-form video feeds popularized by TikTok are rapidly spreading to other platforms, we do not yet understand their impact on cognitive functions. We conducted a between-subjects experiment (N=60) investigating the impact of engaging with TikTok, Twitter, and YouTube while performing a Prospective Memory task (i.e., executing a previously planned action). The study required participants to remember intentions over interruptions. We found that the TikTok condition significantly degraded the users’ performance in this task. As none of the other conditions (Twitter, YouTube, no activity) had a similar effect, our results indicate that the combination of short videos and rapid context-switching impairs intention recall and execution. We contribute a quantified understanding of the effect of social media feed format on Prospective Memory and outline consequences for media technology designers to avoid harming users’ memory and wellbeing.
3
Subjective Probability Correction for Uncertainty Representations
Fumeng Yang (Northwestern University, Evanston, Illinois, United States), Maryam Hedayati (Northwestern University, Evanston, Illinois, United States), Matthew Kay (Northwestern University, Chicago, Illinois, United States)
We propose a new approach to uncertainty communication: we keep the uncertainty representation fixed, but adjust the distribution displayed to compensate for biases in people’s subjective probability in decision-making. To do so, we adopt a linear-in-probit model of subjective probability and derive two corrections to a Normal distribution based on the model’s intercept and slope: one correcting all right-tailed probabilities, and the other preserving the mode and one focal probability. We then conduct two experiments on U.S. demographically-representative samples. We show participants hypothetical U.S. Senate election forecasts as text or a histogram and elicit their subjective probabilities using a betting task. The first experiment estimates the linear-in-probit intercepts and slopes, and confirms the biases in participants’ subjective probabilities. The second, preregistered follow-up shows participants the bias-corrected forecast distributions. We find the corrections substantially improve participants’ decision quality by reducing the integrated absolute error of their subjective probabilities compared to the true probabilities. These corrections can be generalized to any univariate probability or confidence distribution, giving them broad applicability. Our preprint, code, data, and preregistration are available at https://doi.org/10.17605/osf.io/kcwxm.
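The correction logic can be sketched under the linear-in-probit model, probit(p_subjective) = a + b * probit(p_shown): to correct a right-tailed probability, display the value whose predicted subjective reading equals the true probability. The intercept and slope values used below are placeholders for illustration, not the paper's estimated parameters.

```python
from statistics import NormalDist

_phi = NormalDist()  # standard normal: .cdf is Phi, .inv_cdf is the probit

def subjective(p_shown, a, b):
    """Predicted subjective probability when p_shown is displayed,
    under the linear-in-probit model with intercept a and slope b."""
    return _phi.cdf(a + b * _phi.inv_cdf(p_shown))

def corrected_display(p_true, a, b):
    """Probability to display so that the predicted subjective reading
    equals the true probability: invert the linear-in-probit model."""
    return _phi.cdf((_phi.inv_cdf(p_true) - a) / b)

# With a conservative slope (b < 1), subjects pull probabilities toward
# 0.5, so the correction pushes an extreme probability further out:
print(corrected_display(0.9, 0.0, 0.5))
```

With an unbiased reader (a = 0, b = 1) the correction reduces to the identity; the same inversion applies to any right-tailed probability of the displayed distribution, which is how it generalizes to a full univariate forecast.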
3
Birds of a Feather Video-Flock Together: Design and Evaluation of an Agency-Based Parrot-to-Parrot Video-Calling System for Interspecies Ethical Enrichment
Rebecca Kleinberger (Northeastern University, Boston, Massachusetts, United States), Jennifer Cunha (Parrot Kindergarten, Jupiter, Florida, United States), Megha M. Vemuri (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States), Ilyena Hirskyj-Douglas (The University of Glasgow, Glasgow, United Kingdom)
Over 20 million parrots are kept as pets in the US, often lacking appropriate stimuli to meet their high social, cognitive, and emotional needs. After reviewing bird perception and agency literature, we developed an approach to allow parrots to engage in video-calling other parrots. Following a pilot experiment and expert survey, we ran a three-month study with 18 pet birds to evaluate the potential value and usability of a parrot-parrot video-calling system. We assessed the system in terms of perception, agency, engagement, and overall perceived benefits. With 147 bird-triggered calls, our results show that 1) every bird used the system, 2) most birds exhibited high motivation and intentionality, and 3) all caretakers reported perceived benefits, some arguably life-transformative, such as learning to forage or even to fly by watching others. We report on individual insights and propose considerations regarding ethics and the potential of parrot video-calling for enrichment.
3
Libraries of Things: Understanding the Challenges of Sharing Tangible Collections and the Opportunities for HCI
Lee Jones (Queen's University, Kingston, Ontario, Canada), Alaa Nousir (Queen's University, Kingston, Ontario, Canada), Tom Everrett (Ingenium - Canada's Museums of Science and Innovation, Ottawa, Ontario, Canada), Sara Nabil (Queen's University, Kingston, Ontario, Canada)
“Libraries of Things” are tangible collections of borrowable objects. There are many benefits to Libraries of Things such as making objects and skill-building accessible, reducing waste through the sharing of items, and saving costs associated with purchasing rarely-used items. We introduce the first HCI study of Library of Things by interviewing 23 librarians who run a variety of collections such as handheld tools, gear, and musical instruments – within public institutions and more grass-roots efforts in the private sector. In our findings, we discuss the challenges these collections experience in changing behavioural patterns from buying to borrowing, helping individuals 'try new things', iterating to find sharable items, training staff, and manual intervention throughout the borrowing cycle. We present 5 opportunities for HCI research to support interactive skill-sharing, self-borrowing, maintenance recognition and cataloguing 'things', organizing non-uniform inventories, and creating public-awareness. Further in-the-wild studies should also consider the tensions between the values of these organizations and low-cost convenient usage.
3
UEyes: Understanding Visual Saliency across User Interface Types
Yue Jiang (Aalto University, Espoo, Finland)Luis A. Leiva (University of Luxembourg, Esch-sur-Alzette, Luxembourg)Hamed Rezazadegan Tavakoli (Nokia Technologies, Espoo, Finland)Paul R. B. Houssel (University of Luxembourg, Esch-sur-Alzette, Luxembourg)Julia Kylmälä (Aalto University, Espoo, Finland)Antti Oulasvirta (Aalto University, Helsinki, Finland)
While user interfaces (UIs) display elements such as images and text in a grid-based layout, UI types differ significantly in the number of elements and how they are displayed. For example, webpage designs rely heavily on images and text, whereas desktop UIs tend to feature numerous small images. To examine how such differences affect the way users look at UIs, we collected and analyzed a large eye-tracking-based dataset, UEyes (62 participants and 1,980 UI screenshots), covering four major UI types: webpage, desktop UI, mobile UI, and poster. We analyze differences in biases related to such factors as color, location, and gaze direction. We also compare state-of-the-art predictive models and propose improvements for better capturing typical tendencies across UI types. Both the dataset and the models are publicly available.
3
“I Won't Go Speechless”: Design Exploration on a Real-Time Text-To-Speech Speaking Tool for Videoconferencing
Wooseok Kim (KAIST, Daejeon, Korea, Republic of)Jian Jun (KAIST, Daejeon, Korea, Republic of)Minha Lee (KAIST, Daejeon, Korea, Republic of)Sangsu Lee (KAIST, Daejeon, Korea, Republic of)
The COVID-19 pandemic has shifted many business activities to non-face-to-face activities, and videoconferencing has become a new paradigm. However, conference spaces isolated from surrounding interferences are not always readily available. People frequently participate in public places with unexpected crowds or acquaintances, such as cafés, living rooms, and shared offices. These environments have surrounding limitations that potentially cause challenges in speaking up during videoconferencing. To alleviate these issues and support the users in speaking-restrained spatial contexts, we propose a text-to-speech (TTS) speaking tool as a new speaking method to support active videoconferencing participation. We derived the possibility of a TTS speaking tool and investigated the empirical challenges and user expectations of a TTS speaking tool using a technology probe and participatory design methodology. Based on our findings, we discuss the need for a TTS speaking tool and suggest design considerations for its application in videoconferencing.
3
UndoPort: Exploring the Influence of Undo-Actions for Locomotion in Virtual Reality on the Efficiency, Spatial Understanding and User Experience
Florian Müller (LMU Munich, Munich, Germany)Arantxa Ye (LMU Munich, Munich, Germany)Dominik Schön (TU Darmstadt, Darmstadt, Germany)Julian Rasch (LMU Munich, Munich, Germany)
When we get lost in Virtual Reality (VR) or want to return to a previous location, we use the same methods of locomotion for the way back as for the way forward. This is time-consuming and requires additional physical orientation changes, increasing the risk of getting tangled in the headsets' cables. In this paper, we propose the use of undo actions to revert locomotion steps in VR. We explore eight different variations of undo actions as extensions of point-and-teleport, based on the possibility to undo position and orientation changes together with two different visualizations of the undo step (discrete and continuous). We contribute the results of a controlled experiment with 24 participants investigating the efficiency and orientation of the undo techniques in a radial maze task. We found that the combination of position and orientation undo together with a discrete visualization resulted in the highest efficiency without increasing orientation errors.
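The core mechanism the abstract describes, reverting position and orientation together from a history of teleport steps, can be pictured as a simple snapshot stack. The sketch below is a hypothetical illustration of that idea, not the paper's implementation:

```python
# Hypothetical sketch of an undo stack for VR teleport locomotion:
# each teleport pushes the *previous* pose, so undo() reverts position
# and orientation together (the combined variant the study favoured).

class LocomotionHistory:
    def __init__(self, position=(0.0, 0.0, 0.0), yaw=0.0):
        self.position = position  # metres in world space
        self.yaw = yaw            # heading in degrees
        self._stack = []          # poses to restore on undo

    def teleport(self, new_position, new_yaw):
        self._stack.append((self.position, self.yaw))
        self.position, self.yaw = new_position, new_yaw

    def undo(self):
        """Revert the last teleport step, if any."""
        if self._stack:
            self.position, self.yaw = self._stack.pop()

h = LocomotionHistory()
h.teleport((2.0, 0.0, 1.0), 90.0)
h.teleport((5.0, 0.0, 3.0), 180.0)
h.undo()  # back at (2.0, 0.0, 1.0), facing 90 degrees
```

A "discrete" undo visualization would jump straight to the popped pose; a "continuous" one would animate toward it.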
3
User-Driven Constraints for Layout Optimisation in Augmented Reality
Aziz Niyazov (IRIT - University of Toulouse, Toulouse, France)Barrett Ens (Monash University, Melbourne, Australia)Kadek Ananta Satriadi (Monash University, Melbourne, Australia)Nicolas Mellado (CNRS, Toulouse, France)Loic Barthe (IRIT - University of Toulouse, Toulouse, France)Tim Dwyer (Monash University, Melbourne, VIC, Australia)Marcos Serrano (IRIT - Elipse, Toulouse, France)
Automatic layout optimisation allows users to arrange augmented reality content in the real-world environment without the need for tedious manual interactions. This optimisation is often based on modelling the intended content placement as constraints, defined as cost functions. Then, applying a cost minimization algorithm leads to a desirable placement. However, such an approach is limited by the lack of user control over the optimisation results. In this paper we explore the concept of user-driven constraints for augmented reality layout optimisation. With our approach users can define and set up their own constraints directly within the real-world environment. We first present a design space composed of three dimensions: the constraints, the regions of interest and the constraint parameters. Then we explore which input gestures can be employed to define the user-driven constraints of our design space through a user elicitation study. Using the results of the study, we propose a holistic system design and implementation demonstrating our user-driven constraints, which we evaluate in a final user study where participants had to create several constraints at the same time to arrange a set of virtual contents.
3
The Intricacies of Social Robots: Secondary Analysis of Fictional Documentaries to Explore the Benefits and Challenges of Robots in Complex Social Settings
Judith Dörrenbächer (University of Siegen, Siegen, Germany)Ronda Ringfort-Felner (University of Siegen, Siegen, Germany)Marc Hassenzahl (University of Siegen, Siegen, Germany)
In the design of social robots, the focus is often on the robot itself rather than on the intricacies of possible application scenarios. In this paper, we examine eight fictional documentaries about social robots, such as SEYNO, a robot that promotes respect between passengers in trains, or PATO, a robot to watch movies with. Overall, robots were conceptualized either (1) to substitute humans in relationships or (2) to mediate relationships (human-human-robot-interaction). While the former is the basis of many current approaches to social robotics, the latter is less common, but particularly interesting. For instance, the mediation perspective fundamentally impacts the role a robot takes (e.g., role model, black sheep, ally, opponent, moralizer) and thus its potential function and form. From the substitution perspective, robots are expected to mimic human emotions; from the mediation perspective, robots can be positive precisely because they remain objective and are neither emotional nor empathic.
3
ChameleonControl: Teleoperating Real Human Surrogates through Mixed Reality Gestural Guidance for Remote Hands-on Classrooms
Mehrad Faridan (University of Calgary, Calgary, Alberta, Canada)Bheesha Kumari (University of Calgary, Calgary, Alberta, Canada)Ryo Suzuki (University of Calgary, Calgary, Alberta, Canada)
We present ChameleonControl, a real-human teleoperation system for scalable remote instruction in hands-on classrooms. In contrast to existing video or AR/VR-based remote hands-on education, ChameleonControl uses a real human as a surrogate of a remote instructor. Building on existing human-based telepresence approaches, we contribute a novel method to teleoperate a human surrogate through synchronized mixed reality hand gestural navigation and verbal communication. By overlaying the remote instructor's virtual hands in the local user's MR view, the remote instructor can guide and control the local user as if they were physically present. This allows the local user/surrogate to synchronize their hand movements and gestures with the remote instructor, effectively teleoperating a real human. We deploy and evaluate our system in classrooms of physiotherapy training, as well as other application domains such as mechanical assembly, sign language and cooking lessons. The study results confirm that our approach can increase engagement and the sense of co-presence, showing potential for the future of remote hands-on classrooms.
3
GestureExplorer: Immersive Visualisation and Exploration of Gesture Data
Ang Li (Monash University, Melbourne, Australia)Jiazhou Liu (Monash University, Melbourne, VIC, Australia)Maxime Cordeil (The University of Queensland, Brisbane, Australia)Jack Topliss (University of Canterbury, Christchurch, Canterbury, New Zealand)Thammathip Piumsomboon (University of Canterbury, Christchurch, Canterbury, New Zealand)Barrett Ens (Monash University, Melbourne, Australia)
This paper presents the design and evaluation of GestureExplorer, an Immersive Analytics tool that supports the interactive exploration, classification and sensemaking with large sets of 3D temporal gesture data. GestureExplorer features 3D skeletal and trajectory visualisations of gestures combined with abstract visualisations of clustered sets of gestures. By leveraging the large immersive space afforded by a Virtual Reality interface our tool allows free navigation and control of viewing perspective for users to gain a better understanding of gestures. We explored a selection of classification methods to provide an overview of the dataset that was linked to a detailed view of the data that showed different visualisation modalities. We evaluated GestureExplorer with two user studies and collected feedback from participants with diverse visualisation and analytics backgrounds. Our results demonstrated the promising capability of GestureExplorer for providing a useful and engaging experience in exploring and analysing gesture data.
2
Crownboard: A One-Finger Crown-Based Smartwatch Keyboard for Users with Limited Dexterity
Gulnar Rakhmetulla (University of California, Merced, Merced, California, United States)Ahmed Sabbir. Arif (University of California, Merced, Merced, California, United States)
Mobile text entry is difficult for people with motor impairments due to limited access to smartphones and the need for precise target selection on touchscreens. Text entry on smartwatches, on the other hand, has not been well explored for this population. Crownboard enables people with limited dexterity to enter text on a smartwatch using its crown. It uses an alphabetical layout divided into eight zones around the bezel. The zones are scanned either automatically or manually by rotating the crown, then selected by pressing the crown. Crownboard decodes zone sequences into words and displays word suggestions. We validated its design in multiple studies. First, a comparison between manual and automated scanning revealed that manual scanning is faster and more accurate. Second, a comparison between clockwise and shortest-path scanning identified the former to be faster and more accurate. In the final study with representative users, only 30% of participants could use the default Qwerty. They were 9% and 23% faster with manual and automated Crownboard, respectively. All participants were able to use both variants of Crownboard.
2
Exploring Memory-Oriented Interactions with Digital Photos In and Across Time: A Field Study of Chronoscope
Amy Yo Sue Chen (Simon Fraser University, Surrey, British Columbia, Canada)William Odom (Simon Fraser University, Surrey, British Columbia, Canada)Carman Neustaedter (Simon Fraser University, Surrey, British Columbia, Canada)Ce Zhong (Simon Fraser University, Surrey, British Columbia, Canada)Henry Lin (Simon Fraser University, Surrey, British Columbia, Canada)
We describe a field study of Chronoscope, a tangible photo viewer that lets people revisit and explore their digital photos with the support of temporal metadata. Chronoscope offers different temporal modalities for organizing one’s personal digital photo archive, and for exploring possible connections in and across time, and among photos and memories. We deployed four Chronoscopes in four households for three months to understand participants’ experiences over time. Our goals are to investigate the reflective potential of temporal modalities as an alternative design approach for supporting memory-oriented photo exploration, and empirically explore conceptual propositions related to slow technology. Findings revealed that Chronoscope catalyzed a range of reflective experiences on their respective life histories and life stories. It opened up alternative ways of considering time and the potential longevity of personal photo archives. We conclude with implications to present opportunities for future HCI research and practice.
2
Drawing Transforms: A Unifying Interaction Primitive to Procedurally Manipulate Graphics across Style, Space, and Time
Sonia Hashim (University of California Santa Barbara, Santa Barbara, California, United States)Tobias Höllerer (University of California, Santa Barbara, Santa Barbara, California, United States)Jennifer Jacobs (University of California Santa Barbara, Santa Barbara, California, United States)
Procedural functionality enables visual creators to rapidly edit, explore alternatives, and fine-tune artwork in many domains including illustration, motion graphics, and interactive animation. Symbolic procedural tools, such as textual programming languages, are highly expressive but often limit directly manipulating concrete artwork; whereas direct manipulation tools support some procedural expression but limit creators to pre-defined behaviors and inputs. Inspired by visions of using geometric input to create procedural relationships, we identify an opportunity to use vector geometry from artwork to specify expressive user-defined procedural functions. We present Drawing Transforms (DTs), a technique that enables the use of any drawing to procedurally transform the stylistic, spatial, and temporal properties of target artwork. We apply DTs in a prototype motion graphics system to author continuous and discrete transformations, modify multiple elements in a composition simultaneously, create animations, and control fine-grained procedural instantiation. We discuss how DTs can unify procedural authoring through direct manipulation across visual media domains.
2
Evaluating Large Language Models in Generating Synthetic HCI Research Data: a Case Study
Perttu Hämäläinen (Aalto University, Espoo, Finland)Mikke Tavast (Aalto University, Espoo, Finland)Anton Kunnari (University of Helsinki, Helsinki, Finland)
Collecting data is one of the bottlenecks of Human-Computer Interaction (HCI) research. Motivated by this, we explore the potential of large language models (LLMs) in generating synthetic user research data. We use OpenAI’s GPT-3 model to generate open-ended questionnaire responses about experiencing video games as art, a topic not tractable with traditional computational user models. We test whether synthetic responses can be distinguished from real responses, analyze errors of synthetic data, and investigate content similarities between synthetic and real data. We conclude that GPT-3 can, in this context, yield believable accounts of HCI experiences. Given the low cost and high speed of LLM data generation, synthetic data should be useful in ideating and piloting new experiments, although any findings must obviously always be validated with real data. The results also raise concerns: if employed by malicious users of crowdsourcing services, LLMs may make crowdsourcing of self-report data fundamentally unreliable.
2
AI Knowledge: Improving AI Delegation through Human Enablement
Marc Pinski (Technical University of Darmstadt, Darmstadt, Germany)Martin Adam (Technical University of Darmstadt, Darmstadt, Germany)Alexander Benlian (Technical University of Darmstadt, Darmstadt, Germany)
When collaborating with artificial intelligence (AI), humans can often delegate tasks to leverage complementary AI competencies. However, humans often delegate inefficiently. Enabling humans with knowledge about AI can potentially improve inefficient AI delegation. We conducted a between-subjects experiment (two groups, n = 111) to examine how enabling humans with AI knowledge can improve AI delegation in human-AI collaboration. We find that AI knowledge-enabled humans align their delegation decisions more closely with their assessment of how suitable a task is for humans or AI (i.e., task appraisal). We show that delegation decisions closely aligned with task appraisal increase task performance. However, we also find that AI knowledge lowers future intentions to use AI, suggesting that AI knowledge is not strictly positive for human-AI collaboration. Our study contributes to HCI design guidelines with a new perspective on AI features, educating humans regarding general AI functioning and their own (human) performance and biases.
2
Towards a Bedder Future: A Study of Using Virtual Reality while Lying Down
Thomas van Gemert (University of Copenhagen, Copenhagen, Denmark)Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)Jarrod Knibbe (University of Melbourne, Melbourne, Australia)Joanna Bergström (University of Copenhagen, Copenhagen, Denmark)
Most contemporary Virtual Reality (VR) experiences are made for standing users. However, when a user is lying down, either by choice or necessity, it is unclear how they can walk around, dodge obstacles, or grab distant objects. We rotate the virtual coordinate space to study the movement requirements and user experience of using VR while lying down. Fourteen experienced VR users engaged with various popular VR applications for 40 minutes in a study using a think-aloud protocol and semi-structured interviews. Thematic analysis of captured videos and interviews reveals that using VR while lying down is comfortable and usable and that the virtual perspective produces a potent illusion of standing up. However, commonplace movements in VR are surprisingly difficult when lying down, and using alternative interactions is fatiguing and hampers performance. To conclude, we discuss design opportunities to tackle the most significant challenges and to create new experiences.
2
filtered.ink: Creating Dynamic Illustrations with SVG Filters
Tongyu Zhou (Brown University, Providence, Rhode Island, United States)Connie Liu (Brown University, Providence, Rhode Island, United States)Joshua Kong Yang (University of Massachusetts Amherst, Amherst, Massachusetts, United States)Jeff Huang (Brown University, Providence, Rhode Island, United States)
Vector illustrations are object-based, meaning they are composed of strokes that can be filtered individually through textures or animations and transformed without loss of quality. These filters are typically difficult to specify without programming prerequisites. We propose filtered.ink, a full-featured illustration application to construct and explore filters via a node graph interface with a live preview. This turns vector graphics and their filters into a form of vector hypermedia that can be shared and remixed with new users. By examining interactions that occur when crafting, remixing, and using filters for dynamic illustrations through a task-based usability study, we expose new workflow patterns and avenues of expression. The observations result in a user model supported by filtered.ink: see, want, rewant, and remix. In this model, the artist breaks away from traditional notions of illustration, taking advantage of the inherent remixability of the strokes and filters in the vector graphics format.
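For readers unfamiliar with the primitive the abstract builds on, an SVG filter is a small graph of effect nodes attached to an individual element. The snippet below assembles a generic example in Python (the `feTurbulence` and `feDisplacementMap` elements are standard SVG; this is not filtered.ink's own code or API):

```python
# Generic SVG filter example: a turbulence-driven displacement map
# applied to one stroke -- the kind of per-element filter graph that
# node-based tools like filtered.ink let users compose and preview.
import xml.etree.ElementTree as ET

svg = """
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <filter id="wobble">
    <feTurbulence type="turbulence" baseFrequency="0.05" result="noise"/>
    <feDisplacementMap in="SourceGraphic" in2="noise" scale="8"/>
  </filter>
  <path d="M10 50 C 60 10, 140 90, 190 50"
        stroke="black" fill="none" filter="url(#wobble)"/>
</svg>
"""

root = ET.fromstring(svg)  # confirm the markup is well-formed
print(root.tag)
```

Because the filter lives on a single `path`, it can be copied to other strokes or swapped out without touching the artwork itself, which is what makes such filters remixable.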
2
Olfactory Wearables for Mobile Targeted Memory Reactivation
Judith Amores Fernandez (Microsoft, Cambridge, Massachusetts, United States)Nirmita Mehra (MIT, Cambridge, Massachusetts, United States)Bjoern Rasch (University of Freiburg, Freiburg, Switzerland)Pattie Maes (MIT Media Lab, Cambridge, Massachusetts, United States)
This paper investigates how a smartphone-controlled olfactory wearable might improve memory recall. We conducted a within-subjects experiment with 32 participants using the device and without (control). In the experimental condition, bursts of odor were released during visuo-spatial memory navigation tasks, and replayed during sleep the following night in the subjects' home. We found that compared to control, there was an improvement in memory performance when using the scent wearable in memory tasks that involved walking in a physical space. Furthermore, participants recalled more objects and translations when re-exposed to the same scent during the recall test, in addition to during sleep. These effects were statistically significant, and, in the object recall task, they also persisted for more than one week. This experiment demonstrates a potential practical application of olfactory interfaces that can interact with a user during wake as well as sleep to support memory.
2
Not all spacings are created equal: The Effect of Text Spacings in On-the-go Reading Using Optical See-Through Head-Mounted Displays
Chen Zhou (National University of Singapore, Singapore, Singapore)Katherine Fennedy (National University of Singapore, Singapore, Singapore)Felicia Fang-Yi Tan (National University of Singapore, Singapore, Singapore)Shengdong Zhao (National University of Singapore, Singapore, Singapore)Yurui Shao (National University of Singapore, Singapore, Singapore)
The emergent Optical Head-Mounted Display (OHMD) platform has made mobile reading possible by superimposing digital text onto users’ view of the environment. However, mobile reading through OHMD needs to be effectively balanced with the user's environmental awareness. Hence, a series of studies were conducted to explore how text spacing strategies facilitate such balance. Through these studies, it was found that increasing spacing within the text can significantly enhance mobile reading on OHMDs in both simple and complex navigation scenarios and that such benefits mainly come from increasing the inter-line spacing, but not inter-word spacing. Compared with existing positioning strategies, increasing inter-line spacing improves mobile OHMD information reading in terms of reading speed (11.9% faster), walking speed (3.7% faster), and switching between reading and navigation (106.8% more accurate and 33% faster).
2
Partially Blended Realities: Aligning Dissimilar Spaces for Distributed Mixed Reality Meetings
Jens Emil Sloth Grønbæk (Aarhus University, Aarhus, Denmark)Ken Pfeuffer (Aarhus University, Aarhus, Denmark)Eduardo Velloso (University of Melbourne, Melbourne, Victoria, Australia)Morten Astrup (Aarhus University, Aarhus, Denmark)Melanie Isabel Sønderkær Pedersen (Aarhus University, Aarhus, Denmark)Martin Kjær (Aarhus University, Aarhus, Denmark)Germán Leiva (Aarhus University, Aarhus, Denmark)Hans Gellersen (Lancaster University, Lancaster, United Kingdom)
Mixed Reality allows for distributed meetings where people's local physical spaces are virtually aligned into blended interaction spaces. In many cases, people's physical rooms are dissimilar, making it challenging to design a coherent blended space. We introduce the concept of Partially Blended Realities (PBR): using Mixed Reality to support remote collaborators in partially aligning their physical spaces. As physical surfaces are central in collaborative work, PBR supports users in transitioning between different configurations of tables and whiteboard surfaces. In this paper, we 1) describe the design space of PBR, 2) present RealityBlender to explore interaction techniques for how users may configure and transition between blended spaces, and 3) provide insights from a study on how users experience transitions in a remote collaboration task. With this work, we demonstrate new potential for using partial solutions to tackle the alignment problem of dissimilar spaces in distributed Mixed Reality meetings.
2
Classifying Head Movements to Separate Head-Gaze and Head Gestures as Distinct Modes of Input
Baosheng James HOU (Lancaster University, Lancaster, United Kingdom)Joshua Newn (Lancaster University, Lancaster, Lancashire, United Kingdom)Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom)Anam Ahmad Khan (National University of Science and Technology, Islamabad, Pakistan)Per Bækgaard (Technical University of Denmark, Kgs. Lyngby, Denmark)Hans Gellersen (Lancaster University, Lancaster, United Kingdom)
Head movement is widely used as a uniform type of input for human-computer interaction. However, there are fundamental differences between head movements coupled with gaze in support of our visual system, and head movements performed as gestural expression. Both Head-Gaze and Head Gestures are of utility for interaction but differ in their affordances. To facilitate the treatment of Head-Gaze and Head Gestures as separate types of input, we developed HeadBoost as a novel classifier, achieving high accuracy in classifying gaze-driven versus gestural head movement (F1-Score: 0.89). We demonstrate the utility of the classifier with three applications: gestural input while avoiding unintentional input by Head-Gaze; target selection with Head-Gaze while avoiding Midas Touch by head gestures; and switching of cursor control between Head-Gaze for fast positioning and Head Gesture for refinement. The classification of Head-Gaze and Head Gesture allows for seamless head-based interaction while avoiding false activation.
2
Don’t Just Tell Me, Ask Me: AI Systems that Intelligently Frame Explanations as Questions Improve Human Logical Discernment Accuracy over Causal AI explanations
Valdemar Danry (MIT, Cambridge, Massachusetts, United States)Pat Pataranutaporn (MIT, Boston, Massachusetts, United States)Yaoli Mao (Columbia University, New York, New York, United States)Pattie Maes (MIT Media Lab, Cambridge, Massachusetts, United States)
Critical thinking is an essential human skill. Despite its importance, research reveals that our reasoning ability suffers from personal biases and cognitive resource limitations, leading to potentially dangerous outcomes. This paper presents the novel idea of AI-framed Questioning, which turns information relevant to the AI classification into questions to actively engage users' thinking and scaffold their reasoning process. We conducted a study with 204 participants comparing the effects of AI-framed Questioning on a critical thinking task: discernment of the logical validity of socially divisive statements. Our results show that, compared to no feedback and even to causal AI explanations of an always-correct system, AI-framed Questioning significantly increases human discernment of logically flawed statements. Our experiment exemplifies a future style of human-AI co-reasoning system, where the AI becomes a critical thinking stimulator rather than an information teller.
2
"I Am a Mirror Dweller": Probing the Unique Strategies Users Take to Communicate in the Context of Mirrors in Social Virtual Reality
Kexue Fu (Hongshen Honors School, Chongqing, China)Yixin Chen (University of Aberdeen, Aberdeen, United Kingdom)Jiaxun Cao (Duke Kunshan University, Kunshan, Jiangsu, China)Xin Tong (Duke Kunshan University, Kunshan, Suzhou, China)RAY LC (City University of Hong Kong, Hong Kong, Hong Kong)
Increasingly popular social virtual reality (VR) platforms like VRChat have created new ways for people to interact with each other, generating dedicated user communities with unique idioms of socializing in an alternative world. In VRChat, users frequently gather in front of mirrors en masse during online interactions. Understanding how user communities deal with the mirror's unique interactions can generate insights for supporting communication in social VR. In this study, we investigated the mirror's synergistic effect with avatars on behaviors and on dedicated users' conversational performance. Qualitative findings indicate that avatar-mediated communication through mirrors provides functions like ensuring synchronization of incarnations, increasing immersion, and enhancing idealized embodiment to express bolder behaviors anonymously. Quantitative studies show that while mirrors improve self-perception, they have a potentially adverse effect on conversational performance, similar to the role of self-viewing in video conferencing. Studying how users interact with mirrors in an immersive environment allows us to explore how digital environments affect spatialized interactions when transported from physical to digital domains.
2
PumpVR: Rendering Weight of Objects and Avatars through Liquid Mass Transfer in Virtual Reality
Alexander Kalus (University of Regensburg, Regensburg, Germany)Martin Kocur (University of Regensburg, Regensburg, Germany)Johannes Klein (University of Regensburg, Regensburg, Germany)Manuel Mayer (University of Regensburg, Regensburg, Germany)Niels Henze (University of Regensburg, Regensburg, Germany)
Perceiving objects' and avatars’ weight in Virtual Reality (VR) is important to understand their properties and naturally interact with them. However, commercial VR controllers cannot render weight. Controllers presented by previous work are single-handed, slow, or only render a small mass. In this paper, we present PumpVR that renders weight by varying the controllers’ mass according to the properties of virtual objects or bodies. Using a bi-directional pump and solenoid valves, the system changes the controllers' absolute weight by transferring water in or out with an average error of less than 5%. We implemented VR use cases with objects and avatars of different weights to compare the system with standard controllers. A study with 24 participants revealed significantly higher realism and enjoyment when using PumpVR to interact with virtual objects. Using the system to render body weight had significant effects on virtual embodiment, perceived exertion, and self-perceived fitness.
2
Using Pseudo-Stiffness to Enrich the Haptic Experience in Virtual Reality
Yannick Weiss (LMU Munich, Munich, Germany)Steeven Villa (LMU Munich, Munich, Germany)Albrecht Schmidt (LMU Munich, Munich, Germany)Sven Mayer (LMU Munich, Munich, Germany)Florian Müller (LMU Munich, Munich, Germany)
Providing users with a haptic sensation of the hardness and softness of objects in virtual reality is an open challenge. While physical props and haptic devices help, their haptic properties do not allow for dynamic adjustments. To overcome this limitation, we present a novel technique for changing the perceived stiffness of objects based on a visuo-haptic illusion. We achieved this by manipulating the hands' Control-to-Display (C/D) ratio in virtual reality while pressing down on an object with fixed stiffness. In the first study (N=12), we determine the detection thresholds of the illusion. Our results show that we can exploit a C/D ratio from 0.7 to 3.5 without user detection. In the second study (N=12), we analyze the illusion's impact on the perceived stiffness. Our results show that participants perceive the objects to be up to 28.1% softer and 8.9% stiffer, allowing for various haptic applications in virtual reality.
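A naive spring model (our assumption for illustration, not the paper's formulation) shows why scaling the hand's Control-to-Display ratio changes perceived stiffness: if the virtual hand is shown moving 1/ratio times the real displacement, the object appears to compress less (or more) for the same felt force. Note the actual perceptual effect the study measured (up to 28.1% softer, 8.9% stiffer) is smaller than this linear model predicts:

```python
# Toy spring model of the pseudo-stiffness illusion (an assumption for
# illustration only). The physical prop has fixed stiffness k; the
# displayed compression is x_physical / cd_ratio, so for the same force
# F = k * x_physical the apparent stiffness scales with the C/D ratio.

def perceived_stiffness(k_physical, cd_ratio):
    """k_perceived = F / x_display = k_physical * cd_ratio."""
    return k_physical * cd_ratio

# Sample the undetected C/D range reported in the abstract (0.7 to 3.5):
for r in (0.7, 1.0, 3.5):
    print(r, perceived_stiffness(100.0, r))
```

At ratio 0.7 the model gives roughly 30% softer, close to the softening the study observed; on the stiffening side, perception evidently saturates well below what the linear model suggests.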
2
Understanding (Non-)Visual Needs for the Design of Laser-Cut Architecture
Ruei-Che Chang (University of Michigan, Ann Arbor, Michigan, United States)Seraphina Yong (University of Minnesota, Minneapolis, Minnesota, United States)Fang-Ying Liao (National Taiwan University, Taipei, Taiwan)Chih-An Tsao (National Taiwan University, Taipei, Taiwan)Bing-Yu Chen (National Taiwan University, Taipei, Taiwan)
Laser-cutting is a promising fabrication method that empowers makers, including blind or visually-impaired (BVI) creators, to create technologies that fit their needs. Existing work on laser-cut accessibility has facilitated easier assembly as a workaround for existing models. However, laser-cut models are still not designed to accommodate the needs of BVI users. Integrating BVI needs can enrich the greater maker community by enabling cross-group discourse on laser-cut making. To investigate how laser-cut model design can be more accessible overall, we study laser-cut assembly as a process deeply intertwined with the fundamental design of laser-cut models. We present a study with seven sighted and seven BVI participants to compare their usage of laser-cut model affordances during assembly. Data for the BVI participants in this study originate from previous work. We identify assembly cues common or unique to sighted and BVI users, and discuss implications to improve general accessibility in laser-cut design.
2
Speech-Augmented Cone-of-Vision for Exploratory Data Analysis
Riccardo Bovo (Imperial College London, London, United Kingdom)Daniele Giunchi (University College London, London, United Kingdom)Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom)Joshua Newn (Lancaster University, Lancaster, Lancashire, United Kingdom)Hans Gellersen (Aarhus University, Aarhus, Denmark)Enrico Costanza (UCL Interaction Centre, London, United Kingdom)Thomas Heinis (Imperial College, London, United Kingdom)
Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and cursor visualizations based on eye-tracking, but these methods have limitations. Verbal communication is often utilized as a complementary strategy to overcome such disadvantages. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a closer approximation of eye gaze than head direction alone. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.
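The claim that one signal "approximates eye gaze better" than another is typically quantified as the angular error between direction vectors. A minimal sketch of that comparison, with the vectors and function name as illustrative assumptions rather than the paper's pipeline:

```python
import math

def angular_error_deg(v1, v2):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for safety
    return math.degrees(math.acos(cos_theta))

# Toy frame: ground-truth eye gaze (from an eye tracker) versus a
# head-forward vector that points roughly 15 degrees off to the side.
gaze = (0.0, 0.0, 1.0)
head = (0.26, 0.0, 0.97)
error = angular_error_deg(gaze, head)  # ~15 degrees
```

Averaging such per-frame errors over a session is one straightforward way to compare head-based and cone-based gaze proxies against tracked eye gaze.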
2
Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems
Gaole He (Delft University of Technology, Delft, Netherlands)Lucie Kuiper (Delft University of Technology, Delft, Netherlands)Ujwal Gadiraju (Delft University of Technology, Delft, Netherlands)
The dazzling promises of AI systems to augment humans in various tasks hinge on whether humans can appropriately rely on them. Recent research has shown that appropriate reliance is the key to achieving complementary team performance in AI-assisted decision making. This paper addresses an under-explored question: can the Dunning-Kruger Effect (DKE) hinder people's appropriate reliance on AI systems? DKE is a metacognitive bias that leads less-competent individuals to overestimate their own skill and performance. Through an empirical study (N=249), we explored the impact of DKE on human reliance on an AI system, and whether such effects can be mitigated by a tutorial intervention that reveals the fallibility of AI advice and by logic units-based explanations designed to improve user understanding of AI advice. We found that participants who overestimate their performance tend to under-rely on AI systems, which hinders optimal team performance. Logic units-based explanations did not help users improve the calibration of their competence or facilitate appropriate reliance. While the tutorial intervention was highly effective in helping users calibrate their self-assessment and in facilitating appropriate reliance among participants with overestimated self-assessment, we found that it can potentially hurt the appropriate reliance of participants with underestimated self-assessment. Our work has broad implications for the design of methods that tackle user cognitive biases while facilitating appropriate reliance on AI systems. Our findings advance the current understanding of the role of self-assessment in shaping trust and reliance in human-AI decision making, laying out promising directions for future HCI research.
2
De-Stijl: Facilitating Graphics Design with Interactive 2D Color Palette Recommendation
Xinyu Shi (University of Waterloo, Waterloo, Ontario, Canada)Ziqi Zhou (University of Waterloo, Waterloo, Ontario, Canada)Jing Wen Zhang (University of Waterloo, Waterloo, Ontario, Canada)Ali Neshati (University of Waterloo, Waterloo, Ontario, Canada)Anjul Kumar Tyagi (Stony Brook University, Stony Brook, New York, United States)Ryan Rossi (Adobe Research, San Jose, California, United States)Shunan Guo (Adobe Research, San Jose, California, United States)Fan Du (Adobe Research, San Jose, California, United States)Jian Zhao (University of Waterloo, Waterloo, Ontario, Canada)
Selecting a proper color palette is critical in crafting a high-quality graphic design to gain visibility and communicate ideas effectively. To facilitate this process, we propose De-Stijl, an intelligent and interactive color authoring tool that assists novice designers in crafting harmonious color palettes, achieving quick design iterations, and fulfilling design constraints. Through De-Stijl, we contribute a novel 2D color palette concept that allows users to intuitively perceive color designs in context with their proportions and proximities. Further, De-Stijl implements a holistic color authoring system that supports 2D palette extraction, theme-aware and spatially-sensitive color recommendation, and automatic (re)colorization of graphical elements. We evaluated De-Stijl through an in-lab user study comparing the system with existing industry-standard tools, followed by in-depth user interviews. Quantitative and qualitative results demonstrate that De-Stijl is effective in assisting novice design practitioners to quickly colorize graphic designs and easily deliver several alternatives.
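What distinguishes a 2D palette from a conventional swatch list is that each color carries its share of the design. A crude stand-in for that proportion-aware extraction, quantizing pixels to a coarse RGB grid and counting shares; the function and parameters are illustrative assumptions, not De-Stijl's algorithm:

```python
from collections import Counter

def palette_with_proportions(pixels, levels=4):
    """Quantize RGB pixels to a coarse grid and return each quantized
    color with its fraction of the image, largest first."""
    step = 256 // levels
    quantized = [tuple((c // step) * step for c in px) for px in pixels]
    counts = Counter(quantized)
    total = len(pixels)
    return {color: count / total for color, count in counts.most_common()}

# A toy 4-pixel "image": three reddish pixels and one blue one.
img = [(250, 10, 10), (240, 20, 5), (255, 0, 0), (10, 10, 250)]
palette = palette_with_proportions(img)
# -> {(192, 0, 0): 0.75, (0, 0, 192): 0.25}
```

A real extractor would cluster in a perceptual color space rather than a fixed RGB grid, but the output shape (color, proportion) is the part the 2D palette concept builds on.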
2
The Ergonomic Benefits of Passive Haptics and Perceptual Manipulation for Extended Reality Interactions in Constrained Passenger Spaces
Daniel Medeiros (University of Glasgow, Glasgow, United Kingdom)Graham Wilson (University of Glasgow, Glasgow, United Kingdom)Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom)Stephen Anthony Brewster (University of Glasgow, Glasgow, United Kingdom)
Extended Reality (XR) technology brings exciting possibilities for aeroplane passengers, allowing them to escape their limited cabin space. Using nearby physical surfaces enables a connection with the real world while improving the XR experience through touch. However, available surfaces may be located in awkward positions, reducing comfort and input performance and thus limiting their long-term use. We explore the usability of passive haptic surfaces in different orientations, assessing their effects on input performance, user experience and comfort. We then overcome ergonomic issues caused by the confined space by using perceptual manipulation techniques that remap the position and rotation of physical surfaces and user movements, assessing their effects on task workload, comfort and presence. Our results show that the challenges posed by constrained seating environments can be overcome by a combination of passive haptics and remapping the workspace with moderate translation and rotation manipulations. These manipulations allow for good input performance, low workload and comfortable interaction, opening up XR use while in transit.
2
ModSandbox: Facilitating Online Community Moderation Through Error Prediction and Improvement of Automated Rules
Jean Y. Song (DGIST, Daegu, Korea, Republic of)Sangwook Lee (KAIST, Daejeon, Korea, Republic of)Jisoo Lee (Krafton Inc., Seoul, Korea, Republic of)Mina Kim (Kakao Corp, Pangyo, Korea, Republic of)Juho Kim (KAIST, Daejeon, Korea, Republic of)
Despite the common use of rule-based tools for online content moderation, human moderators still spend a lot of time monitoring these tools to ensure they work as intended. Based on surveys and interviews with Reddit moderators who use AutoModerator, we identified the main challenges in reducing false positives and false negatives of automated rules: moderators cannot estimate the actual effect of a rule in advance and have difficulty figuring out how rules should be updated. To address these issues, we built ModSandbox, a novel virtual sandbox system that detects possible false positives and false negatives of a rule and visualizes which part of the rule is causing issues. We conducted a comparative, between-subjects study with online content moderators to evaluate the effect of ModSandbox on improving automated rules. Results show that ModSandbox supports quickly finding possible false positives and false negatives of automated rules and guides moderators in improving them to reduce future errors.
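The sandbox's core check, previewing where a rule disagrees with what moderators actually want removed, can be pictured as running the rule over posts with known labels. A minimal sketch under that assumption; the regex rule, labels, and function name are illustrative, not ModSandbox's or AutoModerator's actual formats:

```python
import re

def audit_rule(pattern: str, labeled_posts):
    """Run a regex moderation rule over posts labeled with ground truth
    ('remove' / 'keep') and collect the disagreements: posts the rule
    would wrongly flag (false positives) or wrongly pass (false negatives)."""
    rule = re.compile(pattern, re.IGNORECASE)
    false_positives, false_negatives = [], []
    for text, label in labeled_posts:
        flagged = bool(rule.search(text))
        if flagged and label == "keep":
            false_positives.append(text)
        elif not flagged and label == "remove":
            false_negatives.append(text)
    return false_positives, false_negatives

posts = [
    ("Buy cheap followers now!!!", "remove"),
    ("Where can I buy this book?", "keep"),
    ("ch34p followers for sale", "remove"),   # leetspeak evades the rule
]
fp, fn = audit_rule(r"\bbuy\b.*\bcheap\b|\bcheap\b.*\bfollowers\b", posts)
# fp -> []   fn -> ["ch34p followers for sale"]
```

Surfacing the missed leetspeak post before the rule goes live is exactly the kind of feedback loop that lets a moderator tighten the pattern without waiting for real errors to accumulate.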