List of Notable Papers

Showing up to the top 30 papers in each category.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

15
FakeForward: Using Deepfake Technology for Feedforward Learning
Christopher Clarke (University of Bath, Bath, United Kingdom); Jingnan Xu (University of Bath, Bath, United Kingdom); Ye Zhu (University of Bath, Bath, United Kingdom); Karan Dharamshi (University of Bath, Bath, United Kingdom); Harry McGill (University of Bath, Bath, United Kingdom); Stephen Black (University of Bath, Bath, United Kingdom); Christof Lutteroth (University of Bath, Bath, United Kingdom)
Videos are commonly used to support learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data and labour intensive: a lot of video footage needs to be collected and manually edited in order to create an effective self-modelling video. We address this by presenting FakeForward -- a method which uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user’s. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.
13
PunchPrint: Creating Composite Fiber-Filament Craft Artifacts by Integrating Punch Needle Embroidery and 3D Printing
Ashley Del Valle (University of California Santa Barbara, Santa Barbara, California, United States); Mert Toka (University of California Santa Barbara, Santa Barbara, California, United States); Alejandro Aponte (University of California Santa Barbara, Santa Barbara, California, United States); Jennifer Jacobs (University of California Santa Barbara, Santa Barbara, California, United States)
New printing strategies have enabled 3D-printed materials that imitate traditional textiles. These filament-based textiles are easy to fabricate but lack the look and feel of fiber textiles. We seek to augment 3D-printed textiles with needlecraft to produce composite materials that integrate the programmability of additive fabrication with the richness of traditional textile craft. We present PunchPrint: a technique for integrating fiber and filament in a textile by combining punch needle embroidery and 3D printing. Using a toolpath that imitates textile weave structure, we print a flexible fabric that provides a substrate for punch needle production. We evaluate our material’s robustness through tensile strength and needle compatibility tests. We integrate our technique into a parametric design tool and produce functional artifacts that show how PunchPrint broadens punch needle craft by reducing labor in small, detailed artifacts, enabling the integration of openings and multiple yarn weights, and scaffolding soft 3D structures.
8
Imprimer: Computational Notebooks for CNC Milling
Jasper Tran O'Leary (University of Washington, Seattle, Washington, United States); Gabrielle Benabdallah (University of Washington, Seattle, Washington, United States); Nadya Peek (University of Washington, Seattle, Washington, United States)
Digital fabrication in industrial contexts involves standardized procedures that prioritize precision and repeatability. However, fabrication machines are now available for practitioners who focus instead on experimentation. In this paper, we reframe hobbyist CNC milling as writing literate programs which interleave documentation, interactive graphics, and source code for machine control. To test this approach, we present Imprimer, a machine infrastructure for a CNC mill and an associated library for a computational notebook. Imprimer lets makers learn experimentally, prototype new interactions for making, and understand physical processes by writing and debugging code. We demonstrate three experimental milling workflows as computational notebooks, conduct a user study with practitioners with a range of backgrounds, and discuss literate programming as a future vision for digital fabrication altogether.
6
3D Printable Play-Dough: New Biodegradable Materials and Creative Possibilities for Digital Fabrication
Leah Buechley (University of New Mexico, Albuquerque, New Mexico, United States); Ruby Ta (University of New Mexico, Albuquerque, New Mexico, United States)
Play-dough is a brightly-colored, easy-to-make, and familiar material. We have developed and tested custom play-dough materials that can be employed in 3D printers designed for clay. This paper introduces a set of recipes for 3D printable play-dough along with an exploration of these materials' print characteristics. We explore the design potential of play-dough as a sustainable fabrication material, highlighting its recyclability, compostability, and repairability. We demonstrate how custom-color prints can be designed and constructed and describe how play-dough can be used as a support material for clay 3D prints. We also present a set of example artifacts made from play-dough and discuss opportunities for future research.
5
Augmenting Human Cognition with an AI-Mediated Intelligent Visual Feedback
Songlin Xu (University of California, San Diego, San Diego, California, United States); Xinyu Zhang (University of California San Diego, San Diego, California, United States)
In this paper, we introduce an AI-mediated framework that can provide intelligent feedback to augment human cognition. Specifically, we leverage deep reinforcement learning (DRL) to provide adaptive time pressure feedback to improve user performance in a math arithmetic task. Time pressure feedback could either improve or deteriorate user performance by regulating user attention and anxiety. Adaptive time pressure feedback controlled by a DRL policy according to users' real-time performance could potentially solve this trade-off problem. However, the DRL training and hyperparameter tuning may require large amounts of data and iterative user studies. Therefore, we propose a dual-DRL framework that trains a regulation DRL agent to regulate user performance by interacting with another simulation DRL agent that mimics user cognition behaviors from an existing dataset. Our user study demonstrates the feasibility and effectiveness of the dual-DRL framework in augmenting user performance, in comparison to the baseline group.
5
Understanding Moderators' Conflict and Conflict Management Strategies with Streamers in Live Streaming Communities
Jie Cai (Penn State University, University Park, Pennsylvania, United States); Donghee Yvette Wohn (New Jersey Institute of Technology, Newark, New Jersey, United States)
As each micro community centered around the streamer attempts to set its own guidelines in live streaming communities, it is common for volunteer moderators (mods) and the streamer to disagree on how to handle various situations. In this study, we conducted an online survey (N=240) with live streaming mods to explore their commitment to the streamer to grow the micro community and the different styles in which they handle conflicts with the streamer. We found that 1) mods apply more active and cooperative styles than passive and assertive styles to manage conflicts, but they might be forced to do so, and 2) mods with strong commitments to the streamer would like to apply styles showing either high concerns for the streamer or low concerns for themselves. We reflect on how these results can affect micro community development and recommend designs to mitigate conflict and strengthen commitment.
4
A Human-Computer Collaborative Editing Tool for Conceptual Diagrams
Lihang Pan (Tsinghua University, Beijing, China); Chun Yu (Tsinghua University, Beijing, China); Zhe He (Tsinghua University, Beijing, China); Yuanchun Shi (Tsinghua University, Beijing, China)
Editing (e.g., editing conceptual diagrams) is a typical office task that requires numerous tedious GUI operations, resulting in poor interaction efficiency and user experience, especially on mobile devices. In this paper, we present a new type of human-computer collaborative editing tool (CET) that enables accurate and efficient editing with little interaction effort. CET divides the task into two parts, and the human and the computer focus on their respective specialties: the human describes high-level editing goals with multimodal commands, while the computer calculates, recommends, and performs detailed operations. We conducted a formative study (N = 16) to determine the concrete task division and implemented the tool on Android devices for the specific tasks of editing concept diagrams. The user study (N = 24 + 20) showed that it increased diagram editing speed by 32.75% compared with existing state-of-the-art commercial tools and led to better editing results and user experience.
4
“Should I Follow the Human, or Follow the Robot?” — Robots in Power Can Have More Influence Than Humans on Decision-Making
Yoyo Tsung-Yu Hou (Cornell University, Ithaca, New York, United States); Wen-Ying Lee (Cornell University, Ithaca, New York, United States); Malte F. Jung (Cornell University, Ithaca, New York, United States)
Artificially intelligent (AI) agents such as robots are increasingly delegated power in work settings, yet it remains unclear how power functions in interactions with both humans and robots, especially when they directly compete for influence. Here we present an experiment where every participant was matched with one human and one robot to perform decision-making tasks. By manipulating who has power, we created three conditions: human as leader, robot as leader, and a no-power-difference control. The results showed that the participants were significantly more influenced by the leader, regardless of whether the leader was a human or a robot. However, they generally held a more positive attitude toward the human than the robot, although they considered whichever was in power as more competent. This study illustrates the importance of power for future Human-Robot Interaction (HRI) and Human-AI Interaction (HAI) research, as it addresses pressing concerns of society about AI-powered intelligent agents.
4
CiteSee: Augmenting Citations in Scientific Papers with Persistent and Personalized Historical Context
Joseph Chee Chang (Allen Institute for AI, Seattle, Washington, United States); Amy X. Zhang (University of Washington, Seattle, Washington, United States); Jonathan Bragg (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Andrew Head (University of Pennsylvania, Philadelphia, Pennsylvania, United States); Kyle Lo (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Doug Downey (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Daniel S. Weld (Allen Institute for Artificial Intelligence, Seattle, Washington, United States)
When reading a scholarly article, inline citations help researchers contextualize the current article and discover relevant prior work. However, it can be challenging to prioritize and make sense of the hundreds of citations encountered during literature reviews. This paper introduces CiteSee, a paper reading tool that leverages a user's publishing, reading, and saving activities to provide personalized visual augmentations and context around citations. First, CiteSee connects the current paper to familiar contexts by surfacing known citations a user had cited or opened. Second, CiteSee helps users prioritize their exploration by highlighting relevant but unknown citations based on saving and reading history. We conducted a lab study that suggests CiteSee is significantly more effective for paper discovery than three baselines. A field deployment study shows CiteSee helps participants keep track of their explorations and leads to better situational awareness and increased paper discovery via inline citation when conducting real-world literature reviews.
4
Co-Writing with Opinionated Language Models Affects Users' Views
Maurice Jakesch (Cornell University, Ithaca, New York, United States); Advait Bhat (Microsoft Research India, Bangalore, India); Daniel Buschek (University of Bayreuth, Bayreuth, Germany); Lior Zalmanson (Tel Aviv University, Tel Aviv, Tel Aviv District, Israel); Mor Naaman (Cornell Tech, New York, New York, United States)
If large language models like GPT-3 preferably produce a particular point of view, they may influence people's opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write -- and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
4
Feel the Force, See the Force: Exploring Visual-tactile Associations of Deformable Surfaces with Colours and Shapes
Cameron Steer (University of Bath, Bath, United Kingdom); Teodora Dinca (University of Bath, Bath, United Kingdom); Crescent Jicol (University of Bath, Bath, United Kingdom); Michael J. Proulx (University of Bath, Bath, United Kingdom); Jason Alexander (University of Bath, Bath, United Kingdom)
Deformable interfaces provide unique interaction potential for force input, for example, when users physically push into a soft display surface. However, there remains limited understanding of which visual-tactile design elements signify the presence and stiffness of such deformable force-input components. In this paper, we explore how people correspond surface stiffness to colours, graphical shapes, and physical shapes. We conducted a cross-modal correspondence (CC) study, where 30 participants associated different surface stiffnesses with colours and shapes. Our findings evidence the CCs between stiffness levels for a subset of the 2D/3D shapes and colours used in the study. We distil our findings in three design recommendations: (1) lighter colours should be used to indicate soft surfaces, and darker colours should indicate stiff surfaces; (2) rounded shapes should be used to indicate soft surfaces, while less-curved shapes should be used to indicate stiffer surfaces, and; (3) longer 2D drop-shadows should be used to indicate softer surfaces, while shorter drop-shadows should be used to indicate stiffer surfaces.
3
The Tactile Dimension: A Method for Physicalizing Touch Behaviors
Laura J. Perovich (Northeastern University, Boston, Massachusetts, United States); Bernice Rogowitz (Visual Perspectives Research, Westchester, New York, United States); Victoria Crabb (Northeastern University, Boston, Massachusetts, United States); Jack Vogelsang (Northeastern University, Boston, Massachusetts, United States); Sara Hartleben (Northeastern University, Boston, Massachusetts, United States); Dietmar Offenhuber (Northeastern University, Boston, Massachusetts, United States)
Traces of touch provide valuable insight into how we interact with the physical world. Measuring touch behavior, however, is expensive and imprecise. Utilizing a fluorescent UV tracer powder, we developed a low-cost analog method to capture persistent, high-contrast touch records on arbitrary objects. We describe our process for selecting a tracer, methods for capturing, enhancing, and aggregating traces, and approaches to examining qualitative aspects of the user experience. Three user studies demonstrate key features of this method. First, we show that it provides clear and durable traces on objects representative of scientific visualization, physicalization, and product design. Second, we demonstrate how this method could be used to study touch perception, by measuring how task and narrative framing elicit different touch behaviors on the same object. Third, we demonstrate how this method can be used to evaluate data physicalizations by observing how participants touch two different physicalizations of COVID-19 time-series data.
3
UndoPort: Exploring the Influence of Undo-Actions for Locomotion in Virtual Reality on the Efficiency, Spatial Understanding and User Experience
Florian Müller (LMU Munich, Munich, Germany); Arantxa Ye (LMU Munich, Munich, Germany); Dominik Schön (TU Darmstadt, Darmstadt, Germany); Julian Rasch (LMU Munich, Munich, Germany)
When we get lost in Virtual Reality (VR) or want to return to a previous location, we use the same methods of locomotion for the way back as for the way forward. This is time-consuming and requires additional physical orientation changes, increasing the risk of getting tangled in the headsets' cables. In this paper, we propose the use of undo actions to revert locomotion steps in VR. We explore eight different variations of undo actions as extensions of point & teleport, based on the possibility to undo position and orientation changes together with two different visualizations of the undo step (discrete and continuous). We contribute the results of a controlled experiment with 24 participants investigating the efficiency and orientation of the undo techniques in a radial maze task. We found that the combination of position and orientation undo together with a discrete visualization resulted in the highest efficiency without increasing orientation errors.
3
“I Won't Go Speechless”: Design Exploration on a Real-Time Text-To-Speech Speaking Tool for Videoconferencing
Wooseok Kim (KAIST, Daejeon, Korea, Republic of); Jian Jun (KAIST, Daejeon, Korea, Republic of); Minha Lee (KAIST, Daejeon, Korea, Republic of); Sangsu Lee (KAIST, Daejeon, Korea, Republic of)
The COVID-19 pandemic has shifted many business activities to non-face-to-face activities, and videoconferencing has become a new paradigm. However, conference spaces isolated from surrounding interferences are not always readily available. People frequently participate in public places with unexpected crowds or acquaintances, such as cafés, living rooms, and shared offices. These environments have surrounding limitations that potentially cause challenges in speaking up during videoconferencing. To alleviate these issues and support the users in speaking-restrained spatial contexts, we propose a text-to-speech (TTS) speaking tool as a new speaking method to support active videoconferencing participation. We derived the possibility of a TTS speaking tool and investigated the empirical challenges and user expectations of a TTS speaking tool using a technology probe and participatory design methodology. Based on our findings, we discuss the need for a TTS speaking tool and suggest design considerations for its application in videoconferencing.
3
Birds of a Feather Video-Flock Together: Design and Evaluation of an Agency-Based Parrot-to-Parrot Video-Calling System for Interspecies Ethical Enrichment
Rebecca Kleinberger (Northeastern University, Boston, Massachusetts, United States); Jennifer Cunha (Parrot Kindergarten, Jupiter, Florida, United States); Megha M. Vemuri (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States); Ilyena Hirskyj-Douglas (The University of Glasgow, Glasgow, United Kingdom)
Over 20 million parrots are kept as pets in the US, often lacking appropriate stimuli to meet their high social, cognitive, and emotional needs. After reviewing bird perception and agency literature, we developed an approach to allow parrots to engage in video-calling other parrots. Following a pilot experiment and expert survey, we ran a three-month study with 18 pet birds to evaluate the potential value and usability of a parrot-parrot video-calling system. We assessed the system in terms of perception, agency, engagement, and overall perceived benefits. With 147 bird-triggered calls, our results show that 1) every bird used the system, 2) most birds exhibited high motivation and intentionality, and 3) all caretakers reported perceived benefits, some arguably life-transformative, such as learning to forage or even to fly by watching others. We report on individual insights and propose considerations regarding ethics and the potential of parrot video-calling for enrichment.
3
ChameleonControl: Teleoperating Real Human Surrogates through Mixed Reality Gestural Guidance for Remote Hands-on Classrooms
Mehrad Faridan (University of Calgary, Calgary, Alberta, Canada); Bheesha Kumari (University of Calgary, Calgary, Alberta, Canada); Ryo Suzuki (University of Calgary, Calgary, Alberta, Canada)
We present ChameleonControl, a real-human teleoperation system for scalable remote instruction in hands-on classrooms. In contrast to existing video or AR/VR-based remote hands-on education, ChameleonControl uses a real human as a surrogate of a remote instructor. Building on existing human-based telepresence approaches, we contribute a novel method to teleoperate a human surrogate through synchronized mixed reality hand gestural navigation and verbal communication. By overlaying the remote instructor's virtual hands in the local user's MR view, the remote instructor can guide and control the local user as if they were physically present. This allows the local user/surrogate to synchronize their hand movements and gestures with the remote instructor, effectively teleoperating a real human. We deploy and evaluate our system in classrooms of physiotherapy training, as well as other application domains such as mechanical assembly, sign language and cooking lessons. The study results confirm that our approach can increase engagement and the sense of co-presence, showing potential for the future of remote hands-on classrooms.
3
Libraries of Things: Understanding the Challenges of Sharing Tangible Collections and the Opportunities for HCI
Lee Jones (Queen's University, Kingston, Ontario, Canada); Alaa Nousir (Queen's University, Kingston, Ontario, Canada); Tom Everrett (Ingenium - Canada's Museums of Science and Innovation, Ottawa, Ontario, Canada); Sara Nabil (Queen's University, Kingston, Ontario, Canada)
“Libraries of Things” are tangible collections of borrowable objects. There are many benefits to Libraries of Things, such as making objects and skill-building accessible, reducing waste through the sharing of items, and saving costs associated with purchasing rarely-used items. We introduce the first HCI study of Libraries of Things by interviewing 23 librarians who run a variety of collections such as handheld tools, gear, and musical instruments – within public institutions and more grass-roots efforts in the private sector. In our findings, we discuss the challenges these collections experience in changing behavioural patterns from buying to borrowing, helping individuals 'try new things', iterating to find sharable items, training staff, and manual intervention throughout the borrowing cycle. We present 5 opportunities for HCI research to support interactive skill-sharing, self-borrowing, maintenance recognition and cataloguing 'things', organizing non-uniform inventories, and creating public awareness. Further in-the-wild studies should also consider the tensions between the values of these organizations and low-cost convenient usage.
3
Subjective Probability Correction for Uncertainty Representations
Fumeng Yang (Northwestern University, Evanston, Illinois, United States); Maryam Hedayati (Northwestern University, Evanston, Illinois, United States); Matthew Kay (Northwestern University, Chicago, Illinois, United States)
We propose a new approach to uncertainty communication: we keep the uncertainty representation fixed, but adjust the distribution displayed to compensate for biases in people’s subjective probability in decision-making. To do so, we adopt a linear-in-probit model of subjective probability and derive two corrections to a Normal distribution based on the model’s intercept and slope: one correcting all right-tailed probabilities, and the other preserving the mode and one focal probability. We then conduct two experiments on U.S. demographically-representative samples. We show participants hypothetical U.S. Senate election forecasts as text or a histogram and elicit their subjective probabilities using a betting task. The first experiment estimates the linear-in-probit intercepts and slopes, and confirms the biases in participants’ subjective probabilities. The second, preregistered follow-up shows participants the bias-corrected forecast distributions. We find the corrections substantially improve participants’ decision quality by reducing the integrated absolute error of their subjective probabilities compared to the true probabilities. These corrections can be generalized to any univariate probability or confidence distribution, giving them broad applicability. Our preprint, code, data, and preregistration are available at https://doi.org/10.17605/osf.io/kcwxm.
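As a rough illustration of the linear-in-probit idea (the intercept and slope values below are hypothetical, not the paper's estimates): if a viewer's subjective probability s relates to a displayed probability p via probit(s) = a + b·probit(p), then inverting that relation gives a corrected display value whose perceived probability equals the true one.

```python
from statistics import NormalDist

_N = NormalDist()  # standard normal: cdf = probit inverse, inv_cdf = probit

def perceived(p: float, a: float, b: float) -> float:
    """Linear-in-probit model of subjective probability:
    probit(subjective) = a + b * probit(displayed)."""
    return _N.cdf(a + b * _N.inv_cdf(p))

def corrected(p: float, a: float, b: float) -> float:
    """Displayed probability whose perceived value equals the true p,
    obtained by inverting the linear-in-probit relation."""
    return _N.cdf((_N.inv_cdf(p) - a) / b)
```

By construction, `perceived(corrected(p, a, b), a, b)` round-trips back to `p`; a real correction would use intercepts and slopes estimated from betting-task data, as the paper does.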
3
UEyes: Understanding Visual Saliency across User Interface Types
Yue Jiang (Aalto University, Espoo, Finland); Luis A. Leiva (University of Luxembourg, Esch-sur-Alzette, Luxembourg); Hamed Rezazadegan Tavakoli (Nokia Technologies, Espoo, Finland); Paul R. B. Houssel (University of Luxembourg, Esch-sur-Alzette, Luxembourg); Julia Kylmälä (Aalto University, Espoo, Finland); Antti Oulasvirta (Aalto University, Helsinki, Finland)
While user interfaces (UIs) display elements such as images and text in a grid-based layout, UI types differ significantly in the number of elements and how they are displayed. For example, webpage designs rely heavily on images and text, whereas desktop UIs tend to feature numerous small images. To examine how such differences affect the way users look at UIs, we collected and analyzed a large eye-tracking-based dataset, UEyes (62 participants and 1,980 UI screenshots), covering four major UI types: webpage, desktop UI, mobile UI, and poster. We analyze differences in biases related to factors such as color, location, and gaze direction. We also compare state-of-the-art predictive models and propose improvements for better capturing typical tendencies across UI types. Both the dataset and the models are publicly available.
3
Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools
Frederic Gmeiner (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Humphrey Yang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Kenneth Holstein (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Nikolas Martelaro (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks. These tools take on more agentic roles than traditional computer-aided design tools and are often portrayed as “co-creators.” Yet, working effectively with such systems requires different skills than working with complex CAD tools alone. To date, we know little about how engineering designers learn to work with AI-based design tools. In this study, we observed trained designers as they learned to work with two AI-based tools on a realistic design task. We find that designers face many challenges in learning to effectively co-create with current systems, including challenges in understanding and adjusting AI outputs and in communicating their design goals. Based on our findings, we highlight several design opportunities to better support designer-AI co-creation.
3
Can Voice Assistants Be Microaggressors? Cross-Race Psychological Responses to Failures of Automatic Speech Recognition
Kimi Wenzel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Nitya Devireddy (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Cam Davison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Geoff Kaufman (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Language technologies have a racial bias, committing greater errors for Black users than for white users. However, little work has evaluated what effect these disparate error rates have on users themselves. The present study aims to understand if speech recognition errors in human-computer interactions may mirror the same effects as misunderstandings in interpersonal cross-race communication. In a controlled experiment (N=108), we randomly assigned Black and white participants to interact with a voice assistant pre-programmed to exhibit a high versus low error rate. Results revealed that Black participants in the high error rate condition, compared to Black participants in the low error rate condition, exhibited significantly higher levels of self-consciousness, lower levels of self-esteem and positive affect, and less favorable ratings of the technology. White participants did not exhibit this disparate pattern. We discuss design implications and the diverse research directions to which this initial study aims to contribute.
3
The Walking Talking Stick: Understanding Automated Note-Taking in Walking Meetings
Luke Haliburton (LMU Munich, Munich, Germany); Natalia Bartłomiejczyk (Lodz University of Technology, Lodz, Poland); Albrecht Schmidt (LMU Munich, Munich, Germany); Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden); Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland)
While walking meetings offer a healthy alternative to sit-down meetings, they also pose practical challenges. Taking notes is difficult while walking, which limits the potential of walking meetings. To address this, we designed the Walking Talking Stick---a tangible device with integrated voice recording, transcription, and a physical highlighting button to facilitate note-taking during walking meetings. We investigated our system in a three-condition between-subjects user study with thirty pairs of participants (N=60) who conducted 15-minute outdoor walking meetings. Participants either used clip-on microphones, the prototype without the button, or the prototype with the highlighting button. We found that the tangible device increased task focus, and the physical highlighting button facilitated turn-taking and resulted in more useful notes. Our work demonstrates how interactive artifacts can incentivize users to hold meetings in motion and enhance conversation dynamics. We contribute insights for future systems which support conducting work tasks in mobile environments.
3
User-Driven Constraints for Layout Optimisation in Augmented Reality
Aziz Niyazov (IRIT - University of Toulouse, Toulouse, France); Barrett Ens (Monash University, Melbourne, Australia); Kadek Ananta Satriadi (Monash University, Melbourne, Australia); Nicolas Mellado (CNRS, Toulouse, France); Loic Barthe (IRIT - University of Toulouse, Toulouse, France); Tim Dwyer (Monash University, Melbourne, VIC, Australia); Marcos Serrano (IRIT - Elipse, Toulouse, France)
Automatic layout optimisation allows users to arrange augmented reality content in the real-world environment without the need for tedious manual interactions. This optimisation is often based on modelling the intended content placement as constraints, defined as cost functions. Then, applying a cost minimization algorithm leads to a desirable placement. However, such an approach is limited by the lack of user control over the optimisation results. In this paper we explore the concept of user-driven constraints for augmented reality layout optimisation. With our approach users can define and set up their own constraints directly within the real-world environment. We first present a design space composed of three dimensions: the constraints, the regions of interest and the constraint parameters. Then we explore which input gestures can be employed to define the user-driven constraints of our design space through a user elicitation study. Using the results of the study, we propose a holistic system design and implementation demonstrating our user-driven constraints, which we evaluate in a final user study where participants had to create several constraints at the same time to arrange a set of virtual contents.
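To make the cost-function framing concrete, here is a minimal sketch (not the authors' implementation) of constraint-based placement: each constraint contributes a cost term, and the layout chooses the candidate position that minimizes the weighted sum. The constraint names and weights are illustrative assumptions.

```python
import math

def distance_cost(pos, anchor):
    # Hypothetical constraint: prefer placements near a region of interest.
    return math.dist(pos, anchor)

def overlap_cost(pos, occupied, radius=1.0):
    # Hypothetical constraint: penalize placements within `radius` of occupied spots.
    return sum(max(0.0, radius - math.dist(pos, o)) for o in occupied)

def best_placement(candidates, anchor, occupied, w_dist=1.0, w_overlap=5.0):
    """Pick the candidate minimizing the weighted sum of constraint costs."""
    return min(
        candidates,
        key=lambda p: w_dist * distance_cost(p, anchor)
                    + w_overlap * overlap_cost(p, occupied),
    )
```

In the paper's terms, letting users define their own constraints amounts to letting them add, parameterize, and spatially scope cost terms like these at runtime.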
3
GestureExplorer: Immersive Visualisation and Exploration of Gesture Data
Ang Li (Monash University, Melbourne, Australia)Jiazhou Liu (Monash University, Melbourne, VIC, Australia)Maxime Cordeil (The University Of Queensland, Brisbane, Australia)Jack Topliss (University of Canterbury, Christchurch, Canterbury, New Zealand)Thammathip Piumsomboon (University of Canterbury, Christchurch, Canterbury, New Zealand)Barrett Ens (Monash University, Melbourne, Australia)
This paper presents the design and evaluation of GestureExplorer, an Immersive Analytics tool that supports interactive exploration, classification, and sensemaking with large sets of 3D temporal gesture data. GestureExplorer features 3D skeletal and trajectory visualisations of gestures combined with abstract visualisations of clustered sets of gestures. By leveraging the large immersive space afforded by a Virtual Reality interface, our tool allows free navigation and control of the viewing perspective, helping users gain a better understanding of gestures. We explored a selection of classification methods to provide an overview of the dataset, linked to a detailed view of the data that offered different visualisation modalities. We evaluated GestureExplorer with two user studies and collected feedback from participants with diverse visualisation and analytics backgrounds. Our results demonstrate the promising capability of GestureExplorer for providing a useful and engaging experience in exploring and analysing gesture data.
3
What is Human-Centered about Human-Centered AI? A Map of the Research Landscape
Tara Capel (Queensland University of Technology, Brisbane, QLD, Australia)Margot Brereton (QUT, Brisbane, Brisbane, Australia)
The application of Artificial Intelligence (AI) across a wide range of domains comes with both high expectations of its benefits and dire predictions of misuse. While AI systems have largely been driven by a technology-centered design approach, the potential societal consequences of AI have mobilized both HCI and AI researchers towards researching human-centered artificial intelligence (HCAI). However, there remains considerable ambiguity about what it means to frame, design and evaluate HCAI. This paper presents a critical review of the large corpus of peer-reviewed literature emerging on HCAI in order to characterize what the community is defining as HCAI. Our review contributes an overview and map of HCAI research based on work that explicitly mentions the terms ‘human-centered artificial intelligence’ or ‘human-centered machine learning’ or their variations, and suggests future challenges and research directions. The map reveals the breadth of research happening in HCAI, established clusters and the emerging areas of Interaction with AI and Ethical AI. The paper contributes a new definition of HCAI, and calls for greater collaboration between AI and HCI research, and new HCAI constructs.
3
Bias-Aware Systems: Exploring Indicators for the Occurrences of Cognitive Biases when Facing Different Opinions
Nattapat Boonprakong (University of Melbourne, Parkville, Victoria, Australia)Xiuge Chen (The University of Melbourne, Melbourne, Victoria, Australia)Catherine Davey (University of Melbourne, Parkville, Victoria, Australia)Benjamin Tag (University of Melbourne, Melbourne, Victoria, Australia)Tilman Dingler (University of Melbourne, Melbourne, Victoria, Australia)
Cognitive biases have been shown to play a critical role in creating echo chambers and spreading misinformation. They undermine our ability to evaluate information and can influence our behaviour without our awareness. To allow the study of occurrences and effects of biases on information consumption behaviour, we explore indicators for cognitive biases in physiological and interaction data. To this end, we conducted two experiments investigating how people experience statements that are congruent with or divergent from their own ideological stance. We collected interaction data, eye tracking data, hemodynamic responses, and electrodermal activity while participants were exposed to ideologically tainted statements. Our results indicate that people spend more time processing statements that are incongruent with their own opinion. We detected differences in blood oxygenation levels between congruent and divergent opinions, a first step towards building systems to detect and quantify cognitive biases.
3
Short-Form Videos Degrade Our Capacity to Retain Intentions: Effect of Context Switching On Prospective Memory
Francesco Chiossi (LMU Munich, Munich, Germany)Luke Haliburton (LMU Munich, Munich, Germany)Changkun Ou (LMU Munich, Munich, Germany)Andreas Martin. Butz (LMU Munich, Munich, Germany)Albrecht Schmidt (LMU Munich, Munich, Germany)
Social media platforms use short, highly engaging videos to catch users' attention. While the short-form video feeds popularized by TikTok are rapidly spreading to other platforms, we do not yet understand their impact on cognitive functions. We conducted a between-subjects experiment (N=60) investigating the impact of engaging with TikTok, Twitter, and YouTube while performing a Prospective Memory task (i.e., executing a previously planned action). The study required participants to remember intentions over interruptions. We found that the TikTok condition significantly degraded the users’ performance in this task. As none of the other conditions (Twitter, YouTube, no activity) had a similar effect, our results indicate that the combination of short videos and rapid context-switching impairs intention recall and execution. We contribute a quantified understanding of the effect of social media feed format on Prospective Memory and outline consequences for media technology designers to not harm the users’ memory and wellbeing.
3
Smartphone-derived Virtual Keyboard Dynamics Coupled with Accelerometer Data as a Window into Understanding Brain Health
Emma Ning (University of Illinois at Chicago, Chicago, Illinois, United States)Andrea T. Cladek (University of Illinois at Chicago, Chicago, Illinois, United States)Mindy K. Ross (University of Illinois at Chicago, Chicago, Illinois, United States)Sarah Kabir (University of Illinois at Chicago, Chicago, Illinois, United States)Amruta Barve (University of Illinois at Chicago, Chicago, Illinois, United States)Ellyn Kennelly (Wayne State University, Detroit, Michigan, United States)Faraz Hussain (University of Illinois at Chicago, Chicago, Illinois, United States)Jennifer Duffecy (University of Illinois at Chicago, Chicago, Illinois, United States)Scott Langenecker (University of Utah, Salt Lake City, Utah, United States)Theresa Nguyen (University of Illinois at Chicago, Chicago, Illinois, United States)Theja Tulabandhula (University of Illinois at Chicago, Chicago, Illinois, United States)John Zulueta (University of Illinois at Chicago, Chicago, Illinois, United States)Olusola A. Ajilore (University of Illinois, Chicago (UIC), Chicago, Illinois, United States)Alexander P. Demos (University of Illinois at Chicago, Chicago, Illinois, United States)Alex Leow (University of Illinois, Chicago (UIC), Chicago, Illinois, United States)
We examine the feasibility of using accelerometer data exclusively collected during typing on a custom smartphone keyboard to study whether typing dynamics are associated with daily variations in mood and cognition. As part of an ongoing digital mental health study involving mood disorders, we collected data from a well-characterized clinical sample (N = 85) and classified accelerometer data per typing session into orientation (upright vs. not) and motion (active vs. not). The mood disorder group showed lower cognitive performance despite mild symptoms (depression/mania). There were also diurnal pattern differences with respect to cognitive performance: individuals with higher cognitive performance typed faster and were less sensitive to time of day. They also exhibited more well-defined diurnal patterns in smartphone keyboard usage: they engaged with the keyboard more during the day and tapered their usage more at night compared to those with lower cognitive performance, suggesting a healthier usage of their phone.
3
The Intricacies of Social Robots: Secondary Analysis of Fictional Documentaries to Explore the Benefits and Challenges of Robots in Complex Social Settings
Judith Dörrenbächer (University of Siegen, Siegen, Germany)Ronda Ringfort-Felner (University of Siegen, Siegen, Germany)Marc Hassenzahl (University of Siegen, Siegen, Germany)
In the design of social robots, the focus is often on the robot itself rather than on the intricacies of possible application scenarios. In this paper, we examine eight fictional documentaries about social robots, such as SEYNO, a robot that promotes respect between passengers in trains, or PATO, a robot to watch movies with. Overall, robots were conceptualized either (1) to substitute humans in relationships or (2) to mediate relationships (human-human-robot-interaction). While the former is the basis of many current approaches to social robotics, the latter is less common, but particularly interesting. For instance, the mediation perspective fundamentally impacts the role a robot takes (e.g., role model, black sheep, ally, opponent, moralizer) and thus its potential function and form. From the substitution perspective, robots are expected to mimic human emotions; from the mediation perspective, robots can be positive precisely because they remain objective and are neither emotional nor empathic.
2
How does HCI Understand Human Agency and Autonomy?
Dan Bennett (University of Bristol, Bristol, Bristol, United Kingdom)Oussama Metatla (University of Bristol, Bristol, United Kingdom)Anne Roudaut (University of Bristol, Bristol, United Kingdom)Elisa D.. Mekler (Aalto University, Espoo, Finland)
Human agency and autonomy have always been fundamental concepts in HCI. New developments, including ubiquitous AI and the growing integration of technologies into our lives, make these issues ever pressing, as technologies increase their ability to influence our behaviours and values. However, in HCI, understandings of autonomy and agency remain ambiguous. Both concepts are used to describe a wide range of phenomena pertaining to sense-of-control, material independence, and identity. It is unclear to what degree these understandings are compatible, and how they support the development of research programs and practical interventions. We address this by reviewing 30 years of HCI research on autonomy and agency to identify current understandings, open issues, and future directions. From this analysis, we identify ethical issues, and outline key themes to guide future work. We also articulate avenues for advancing clarity and specificity around these concepts, and for coordinating integrative work across different HCI communities.
2
Felt Experiences with Kombucha Scoby: Exploring First-person Perspectives with Living Matter
Netta Ofer (University of Colorado, Boulder, Boulder, Colorado, United States)Mirela Alistar (University of Colorado Boulder, Boulder, Colorado, United States)
Designing with living organisms can offer new perspectives to design research and practices in HCI. In this work, we explore first-person perspectives through design research with Kombucha Scoby, a microbial biofilm. We began with a material design exploration, producing digitally fabricated and crafted samples with Scoby. As we noticed our felt experiences while growing and working with Kombucha Scoby, we shifted towards a reflective autoethnographic study. Through reflective writings, we followed sensory experiences such as hearing the Kombucha fermentation, touching the Scoby while harvesting it, and watching the slow growth of layers over time. Subsequently, we designed “sensory engagement probes”: experiments that bring forward new connections and communicate our process, motivations, and tensions that emerged while engaging with the organism. Lastly, we discuss how such design research can inform material design with living matter by creating space to contemplate “life as shared experience” and more-than-human design perspectives.
2
“Information-Backward but Sex-Forward”: Navigating Masculinity towards Intimate Wellbeing and Heterosexual Relationships
Anupriya Tuli (IIIT-Delhi, New Delhi, Delhi, India)Azra Ismail (Georgia Institute of Technology, Atlanta, Georgia, United States)Karthik S Bhat (Georgia Institute of Technology, Atlanta, Georgia, United States)Pushpendra Singh (IIIT-Delhi, New Delhi, India)Neha Kumar (Georgia Tech, Atlanta, Georgia, United States)
There has been a growing interest in reproductive health and intimate wellbeing in Human-Computer Interaction, increasingly from an ecological perspective. Much of this work is centered around women's experiences across diverse settings, emphasizing men's limited engagement and need for greater participation on these topics. Our research responds to this gap by investigating cisgender men's experiences of cultivating sexual health literacies in an urban Indian context. We leverage media probes to stimulate focus group discussions, using popular media references on men's fertility to elicit shared reflection. Our findings uncover the role that humor and masculinity play in shaping men's perceptions of their sexual health and how this influences their sense of agency and participation in heterosexual intimate relationships. We further discuss how technologies might be designed to support men's participation in these relationships as supportive partners and allies.
2
What Do We Mean When We Talk about Trust in Social Media? A Systematic Review
Yixuan Zhang (Georgia Institute of Technology, Atlanta, Georgia, United States)Joseph D. Gaggiano (Georgia Institute of Technology , Atlanta, Georgia, United States)Nutchanon Yongsatianchot (Northeastern University, Boston, Massachusetts, United States)Nurul M. Suhaimi (Universiti Malaysia Pahang, Pahang, Malaysia)Miso Kim (Northeastern University, Boston, Massachusetts, United States)Yifan Sun (William & Mary, Williamsburg, Virginia, United States)Jacqueline Griffin (Northeastern University, Boston, Massachusetts, United States)Andrea G. Parker (Georgia Tech, Atlanta, Georgia, United States)
Do people trust social media? If so, why, in what contexts, and how does that trust impact their lives? Researchers, companies, and journalists alike have increasingly investigated these questions, which are fundamental to understanding social media interactions and their implications for society. However, trust in social media is a complex concept, and there is conflicting evidence about the antecedents and implications of trusting social media content, users, and platforms. More problematic is that we lack basic agreement as to what trust means in the context of social media. Addressing these challenges, we conducted a systematic review to identify themes and challenges in this field. Through our analysis of 70 papers, we contribute a synthesis of how trust in social media is defined, conceptualized, and measured, a summary of trust antecedents in social media, an understanding of how trust in social media impacts behaviors and attitudes, and directions for future work.
2
fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks
Steven Moore (Technical University Munich (TUM), Munich, Germany)Q. Vera Liao (Microsoft Research, Montreal, Quebec, Canada)Hariharan Subramonyam (Stanford University, Stanford, California, United States)
To design with AI models, user experience (UX) designers must assess the fit between the model and user needs. Based on user research, they need to contextualize the model's behavior and potential failures within their product-specific data instances and user scenarios. However, our formative interviews with ten UX professionals revealed that such a proactive discovery of model limitations is challenging and time-intensive. Furthermore, designers often lack technical knowledge of AI and accessible exploration tools, which challenges their understanding of model capabilities and limitations. In this work, we introduce a failure-driven design approach to AI, a workflow that encourages designers to explore model behavior and failure patterns early in the design process. The implementation of fAIlureNotes, a designer-centered failure exploration and analysis tool, supports designers in evaluating models and identifying failures across diverse user groups and scenarios. Our evaluation with UX practitioners shows that fAIlureNotes outperforms today's interactive model cards in assessing context-specific model performance.
2
Predicting Gaze-based Target Selection in Augmented Reality Headsets based on Eye and Head Endpoint Distributions
Yushi Wei (Xi'an Jiaotong-Liverpool University, Suzhou, China)Rongkai Shi (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)Difeng Yu (University of Melbourne, Melbourne, Victoria, Australia)Yihong Wang (Xi'an Jiaotong-Liverpool University, Suzhou, China)Yue Li (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)Lingyun Yu (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)Hai-Ning Liang (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)
Target selection is a fundamental task in interactive Augmented Reality (AR) systems. Predicting the intended target of selection in such systems can provide users with a smooth, low-friction interaction experience. Our work aims to predict gaze-based target selection in AR headsets with eye and head endpoint distributions, which describe the probability distribution of eye and head 3D orientation when a user triggers a selection input. We first conducted a user study to collect users’ eye and head behavior in a gaze-based pointing selection task with two confirmation mechanisms (air tap and blinking). Based on the study results, we then built two models: a unimodal model using only eye endpoints and a multimodal model using both eye and head endpoints. Results from a second user study showed that the pointing accuracy is improved by approximately 32% after integrating our models into gaze-based selection techniques.
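The endpoint-distribution idea in the abstract can be sketched as a simple probabilistic classifier. This is an illustrative sketch, not the authors' model: the targets, mean endpoints, and standard deviations below are invented, and we assume independent Gaussian eye and head endpoint distributions per axis.

```python
import math

# Hypothetical targets with per-target mean eye/head endpoints (degrees),
# standing in for learned endpoint distributions.
TARGETS = {
    "A": {"eye": (0.0, 0.0), "head": (0.0, 0.0)},
    "B": {"eye": (5.0, 0.0), "head": (3.0, 0.0)},
}
SIGMA_EYE, SIGMA_HEAD = 1.5, 2.5  # assumed spread of each distribution

def log_gauss(x, mu, sigma):
    # Log-density of a 1D Gaussian, summed per axis below.
    return -((x - mu) ** 2) / (2 * sigma ** 2) - math.log(sigma * math.sqrt(2 * math.pi))

def score(target, eye, head):
    # Multimodal model: combine eye and head endpoint likelihoods.
    t = TARGETS[target]
    return sum(log_gauss(e, m, SIGMA_EYE) for e, m in zip(eye, t["eye"])) + \
           sum(log_gauss(h, m, SIGMA_HEAD) for h, m in zip(head, t["head"]))

def predict(eye, head):
    # Intended target = the one whose endpoint distributions best explain
    # the observed eye and head orientations at selection time.
    return max(TARGETS, key=lambda t: score(t, eye, head))

print(predict(eye=(4.2, 0.3), head=(2.0, 0.1)))  # → B
```

Dropping the `head` terms from `score` yields the unimodal, eye-only variant the abstract contrasts against.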
2
Using Pseudo-Stiffness to Enrich the Haptic Experience in Virtual Reality
Yannick Weiss (LMU Munich, Munich, Germany)Steeven Villa (LMU Munich, Munich, Germany)Albrecht Schmidt (LMU Munich, Munich, Germany)Sven Mayer (LMU Munich, Munich, Germany)Florian Müller (LMU Munich, Munich, Germany)
Providing users with a haptic sensation of the hardness and softness of objects in virtual reality is an open challenge. While physical props and haptic devices help, their haptic properties do not allow for dynamic adjustments. To overcome this limitation, we present a novel technique for changing the perceived stiffness of objects based on a visuo-haptic illusion. We achieved this by manipulating the hands' Control-to-Display (C/D) ratio in virtual reality while pressing down on an object with fixed stiffness. In the first study (N=12), we determine the detection thresholds of the illusion. Our results show that we can exploit a C/D ratio from 0.7 to 3.5 without user detection. In the second study (N=12), we analyze the illusion's impact on the perceived stiffness. Our results show that participants perceive the objects to be up to 28.1% softer and 8.9% stiffer, allowing for various haptic applications in virtual reality.
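The C/D-ratio manipulation can be illustrated with a small sketch. This is not the paper's implementation: we assume a C/D ratio defined as physical (control) displacement over displayed displacement, and we reuse the reported 0.7--3.5 undetectable range only as a clamp.

```python
# Undetectable C/D range reported in the abstract's first study.
CD_MIN, CD_MAX = 0.7, 3.5

def clamp_cd(ratio):
    # Keep the manipulation within the range users did not detect.
    return max(CD_MIN, min(CD_MAX, ratio))

def displayed_press_depth(physical_depth_mm, cd_ratio):
    # A larger C/D ratio shrinks the virtual hand's travel for the same
    # physical press on the fixed-stiffness prop, which can make the
    # object feel stiffer (and a smaller ratio, softer).
    return physical_depth_mm / clamp_cd(cd_ratio)

print(displayed_press_depth(10.0, 2.0))  # → 5.0 (virtual hand moves half as far)
print(displayed_press_depth(10.0, 5.0))  # ratio clamped to 3.5
```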
2
The Ergonomic Benefits of Passive Haptics and Perceptual Manipulation for Extended Reality Interactions in Constrained Passenger Spaces
Daniel Medeiros (University of Glasgow, Glasgow, United Kingdom)Graham Wilson (University of Glasgow, Glasgow, United Kingdom)Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom)Stephen Anthony. Brewster (University of Glasgow, Glasgow, United Kingdom)
Extended Reality (XR) technology brings exciting possibilities for aeroplane passengers, allowing them to escape their limited cabin space. Using nearby physical surfaces enables a connection with the real world while improving the XR experience through touch. However, available surfaces may be located in awkward positions, reducing comfort and input performance and thus limiting their long-term use. We explore the usability of passive haptic surfaces in different orientations, assessing their effects on input performance, user experience and comfort. We then overcome ergonomic issues caused by the confined space by using perceptual manipulation techniques that remap the position and rotation of physical surfaces and user movements, assessing their effects on task workload, comfort and presence. Our results show that the challenges posed by constrained seating environments can be overcome by a combination of passive haptics and remapping the workspace with moderate translation and rotation manipulations. These manipulations allow for good input performance, low workload and comfortable interaction, opening up XR use while in transit.
2
ProxSituated Visualization: An Extended Model of Situated Visualization using Proxies for Physical Referents
Kadek Ananta Satriadi (University of South Australia, Adelaide, Australia)Andrew Cunningham (University of South Australia, Adelaide, Australia)Ross T. Smith (University of South Australia, Adelaide, Australia)Tim Dwyer (Monash University, Melbourne, Australia)Adam Mark. Drogemuller (University of South Australia, Adelaide, Australia)Bruce H. Thomas (University of South Australia, Mawson Lakes, South Australia, Australia)
Existing situated visualization models assume the user is able to directly interact with the objects and spaces to which the data refers (known as physical referents). We review a growing body of work exploring scenarios where the user interacts with a proxy representation of the physical referent rather than immediately with the object itself. This introduces a complex mixture of immediate situatedness and proxies of situatedness that goes beyond the expressiveness of current models. We propose an extended model of situated visualization that encompasses Immediate Situated Visualization and ProxSituated (Proxy of Situated) Visualization. Our model describes a set of key entities involved in proxSituated scenarios and important relationships between them. From this model, we derive design dimensions and apply them to existing situated visualization work. The resulting design space allows us to describe and evaluate existing scenarios, as well as to creatively generate new conceptual scenarios.
2
De-Stijl: Facilitating Graphics Design with Interactive 2D Color Palette Recommendation
Xinyu Shi (University of Waterloo, Waterloo, Ontario, Canada)Ziqi Zhou (University of Waterloo, Waterloo, Ontario, Canada)Jing Wen Zhang (University of Waterloo, Waterloo, Ontario, Canada)Ali Neshati (University of Waterloo, Waterloo, Ontario, Canada)Anjul Kumar Tyagi (Stony Brook University, Stony Brook, New York, United States)Ryan Rossi (Adobe Research, San Jose, California, United States)Shunan Guo (Adobe Research, San Jose, California, United States)Fan Du (Adobe Research, San Jose, California, United States)Jian Zhao (University of Waterloo, Waterloo, Ontario, Canada)
Selecting a proper color palette is critical in crafting a high-quality graphic design to gain visibility and communicate ideas effectively. To facilitate this process, we propose De-Stijl, an intelligent and interactive color authoring tool to assist novice designers in crafting harmonic color palettes, achieving quick design iterations, and fulfilling design constraints. Through De-Stijl, we contribute a novel 2D color palette concept that allows users to intuitively perceive color designs in context with their proportions and proximities. Further, De-Stijl implements a holistic color authoring system that supports 2D palette extraction, theme-aware and spatial-sensitive color recommendation, and automatic graphical elements (re)colorization. We evaluated De-Stijl through an in-lab user study by comparing the system with existing industry standard tools, followed by in-depth user interviews. Quantitative and qualitative results demonstrate that De-Stijl is effective in assisting novice design practitioners to quickly colorize graphic designs and easily deliver several alternatives.
2
“It can bring you in the right direction”: Episode-Driven Data Narratives to Help Patients Navigate Multidimensional Diabetes Data to Make Care Decisions
Shriti Raj (University of Michigan, Ann Arbor, Michigan, United States)Toshi Gupta (University of Michigan, Ann Arbor, Michigan, United States)Joyce Lee (University of Michigan, Ann Arbor, Michigan, United States)Matthew Kay (Northwestern University, Chicago, Illinois, United States)Mark W. Newman (U. of Michigan, Ann Arbor, Michigan, United States)
Engaging with multiple streams of personal health data to inform self-care of chronic health conditions remains a challenge. Existing informatics tools provide limited support for patients to make data actionable. To design better tools, we conducted two studies with Type 1 diabetes patients and their clinicians. In the first study, we observed data review sessions between patients and clinicians to articulate the tasks involved in assessing different types of data from diabetes devices to make care decisions. Drawing upon these tasks, we designed novel data interfaces called episode-driven data narratives and performed a task-driven evaluation. We found that as compared to the commercially available diabetes data reports, episode-driven data narratives improved engagement and decision-making with data. We discuss implications for designing data interfaces to support interaction with multidimensional health data to inform self-care.
2
Exploring Co-located Interactions with a Shape-Changing Bar Chart
Miriam Sturdee (Lancaster University, Lancaster, United Kingdom)Hayat Kara (Lancaster University, Lancaster, United Kingdom)Jason Alexander (University of Bath, Bath, United Kingdom)
Data-physicalizations encode data and meaning through geometry or material properties, providing a non-planar view of data, offering novel opportunities for interrogation, discovery and presentation. This field has explored how single users interact with complex 3D data, but the challenges in the application of this technology to collaborative situations have not been addressed. We describe a study exploring interactions and preferences among co-located individuals using a dynamic data-physicalization in the form of a shape-changing bar chart, and compare this to previous work with single participants. Results suggest that co-located interactions with physical data prompt non-interactive hand gestures, a mirroring of physicalizations, and novel hand gestures in comparison to single participant studies. We also note that behavioural similarities in participants between interactive tabletop studies and data-physicalizations may be capitalised upon for further development of these dynamic representations. Finally, we consider the implications and challenges for the adoption of these types of platforms.
2
AngleKindling: Supporting Journalistic Angle Ideation with Large Language Models
Savvas Petridis (Columbia University, New York, New York, United States)Nicholas Diakopoulos (Northwestern University, Evanston, Illinois, United States)Kevin Crowston (Syracuse University, Syracuse, New York, United States)Mark Hansen (Columbia University, New York, New York, United States)Keren Henderson (Syracuse University, Syracuse, New York, United States)Stan Jastrzebski (Syracuse University, Syracuse, New York, United States)Jeffrey V. Nickerson (Stevens Institute of Technology, Hoboken, New Jersey, United States)Lydia B. Chilton (Columbia University, New York, New York, United States)
News media often leverage documents to find ideas for stories, while being critical of the frames and narratives present. Developing angles from a document such as a press release is a cognitively taxing process, in which journalists critically examine the implicit meaning of its claims. Informed by interviews with journalists, we developed AngleKindling, an interactive tool which employs the common sense reasoning of large language models to help journalists explore angles for reporting on a press release. In a study with 12 professional journalists, we show that participants found AngleKindling significantly more helpful and less mentally demanding to use for brainstorming ideas, compared to a prior journalistic angle ideation tool. AngleKindling helped journalists deeply engage with the press release and recognize angles that were useful for multiple types of stories. From our findings, we discuss how to help journalists customize and identify promising angles, and extending AngleKindling to other knowledge-work domains.
2
Not all spacings are created equal: The Effect of Text Spacings in On-the-go Reading Using Optical See-Through Head-Mounted Displays
Chen Zhou (National University of Singapore, Singapore, Singapore)Katherine Fennedy (National University of Singapore, Singapore, Singapore)Felicia Fang-Yi. Tan (National University of Singapore, Singapore, Singapore)Shengdong Zhao (National University of Singapore, Singapore, Singapore)Yurui Shao (National University of Singapore , Singapore, Singapore , Singapore)
The emergent Optical Head-Mounted Display (OHMD) platform has made mobile reading possible by superimposing digital text onto users’ view of the environment. However, mobile reading through OHMD needs to be effectively balanced with the user's environmental awareness. Hence, a series of studies were conducted to explore how text spacing strategies facilitate such balance. Through these studies, it was found that increasing spacing within the text can significantly enhance mobile reading on OHMDs in both simple and complex navigation scenarios and that such benefits mainly come from increasing the inter-line spacing, but not inter-word spacing. Compared with existing positioning strategies, increasing inter-line spacing improves mobile OHMD information reading in terms of reading speed (11.9% faster), walking speed (3.7% faster), and switching between reading and navigation (106.8% more accurate and 33% faster).
2
How Instructional Data Physicalization Fosters Reflection in Personal Informatics
Marit Bentvelzen (Utrecht University, Utrecht, Netherlands)Julia Dominiak (Lodz University of Technology, Łódź, Poland)Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland)Frederique Henraat (Utrecht University, Utrecht, Netherlands)Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden)
The ever-increasing number of devices quantifying our lives offers a perspective of high awareness of one's wellbeing, yet it remains a challenge for personal informatics (PI) to effectively support data-based reflection. Effective reflection is recognised as a key factor for PI technologies to foster wellbeing. Here, we investigate whether building tangible representations of health data can offer engaging and reflective experiences. We conducted a between-subjects study where n=60 participants explored their immediate blood pressure data in relation to medical norms. They either used a standard mobile app, built a data representation from LEGO bricks based on instructions, or completed a free-form brick build. We found that building with instructions fostered more comparison and using bricks fostered focused attention. The free-form condition required extra time to complete, and lacked usability. Our work shows that designing instructional physicalisation experiences for PI is a means of improving engagement and understanding of personal data.
2
Investigating Eyes-away Mid-air Typing in Virtual Reality using Squeeze haptics-based Postural Reinforcement
Aakar Gupta (Meta Inc, Redmond, Washington, United States)Naveen Sendhilnathan (Meta Inc, Redmond, Washington, United States)Jess Hartcher-O'Brien (Meta Inc, Redmond, Washington, United States)Evan Pezent (Meta Inc, Redmond, Washington, United States)Hrvoje Benko (Meta, Redmond, Washington, United States)Tanya R.. Jonker (Meta Inc, Redmond, Washington, United States)
In this paper, we investigate postural reinforcement haptics for mid-air typing using squeeze actuation on the wrist. We propose and validate eye-tracking based objective metrics that capture the impact of haptics on the user's experience, which traditional performance metrics like speed and accuracy are not able to capture. To this end, we design four wrist-based haptic feedback conditions: no haptics, vibrations on keypress, squeeze+vibrations on keypress, and squeeze posture reinforcement + vibrations on keypress. We conduct a text input study with 48 participants to compare the four conditions on typing and gaze metrics. Our results show that for expert QWERTY users, posture reinforcement haptics significantly benefit typing by reducing the visual attention on the keyboard by up to 44% relative to no haptics, thus enabling eyes-away behaviors.
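The headline result above rests on a gaze-based attention metric. A minimal sketch of one such metric, the fraction of gaze samples falling inside the on-screen keyboard region, is shown below; the region bounds, coordinate convention, and sample points are illustrative assumptions, not the paper's actual instrumentation.

```python
def gaze_on_keyboard_ratio(samples, region):
    """Fraction of (x, y) gaze samples inside an axis-aligned region."""
    x0, y0, x1, y1 = region
    hits = sum(1 for x, y in samples if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / len(samples) if samples else 0.0

# Hypothetical normalized screen coordinates: keyboard occupies the bottom strip.
keyboard_region = (0.0, 0.0, 1.0, 0.3)
gaze_samples = [(0.5, 0.1), (0.5, 0.2), (0.2, 0.8), (0.9, 0.25)]
ratio = gaze_on_keyboard_ratio(gaze_samples, keyboard_region)  # 3 of 4 samples hit
```

A 44% reduction in "visual attention on the keyboard" would then correspond to a proportional drop in this ratio between conditions.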
2
Respecifying Phubbing: Video-Based Analysis of Smartphone Use in Co-Present Interactions
Iuliia Avgustis (University of Oulu, Oulu, Finland)
The concept of phubbing (generally defined as a practice of ignoring co-present others by focusing on one’s mobile device) is now widely used in studies aiming to understand the effects of smartphone use on co-present interactions. However, most of these studies are quantitative in nature and fail to grasp the interactional context of smartphone use. Drawing on video recordings and utilizing multimodal interaction analysis, the present study examines phubbing in naturally occurring interactions among young adults. Contrary to most previous research, the analysis reveals that disengagement often precedes self-initiated smartphone use rather than follows it. The study identifies factors that affect whether phubbing is reciprocated and whether it is oriented to as problematic. As a result of the analysis, an alternative conceptualization of phubbing is offered. By reflecting on participants’ ways of managing phubbing and its consequences, we discuss design solutions for supporting them in this task.
2
Evaluating Large Language Models in Generating Synthetic HCI Research Data: a Case Study
Perttu Hämäläinen (Aalto University, Espoo, Finland)Mikke Tavast (Aalto University, Espoo, Finland)Anton Kunnari (University of Helsinki, Helsinki, Finland)
Collecting data is one of the bottlenecks of Human-Computer Interaction (HCI) research. Motivated by this, we explore the potential of large language models (LLMs) in generating synthetic user research data. We use OpenAI’s GPT-3 model to generate open-ended questionnaire responses about experiencing video games as art, a topic not tractable with traditional computational user models. We test whether synthetic responses can be distinguished from real responses, analyze errors of synthetic data, and investigate content similarities between synthetic and real data. We conclude that GPT-3 can, in this context, yield believable accounts of HCI experiences. Given the low cost and high speed of LLM data generation, synthetic data should be useful in ideating and piloting new experiments, although any findings must obviously always be validated with real data. The results also raise concerns: if employed by malicious users of crowdsourcing services, LLMs may make crowdsourcing of self-report data fundamentally unreliable.
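Two of the abstract's steps lend themselves to a small sketch: framing an open-ended survey question as an LLM prompt, and quantifying content similarity between a real and a synthetic response. The prompt template, persona, and both response texts below are invented for illustration; the authors' actual prompts and similarity analysis are not reproduced here, and word-level Jaccard overlap is just one simple stand-in measure.

```python
def build_prompt(question: str, persona: str) -> str:
    """Frame an open-ended survey question as a completion prompt for an LLM."""
    return (
        f"You are {persona} answering an open-ended survey question.\n"
        f"Question: {question}\n"
        f"Answer:"
    )

def jaccard_similarity(a: str, b: str) -> float:
    """Word-level Jaccard overlap between two responses, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Hypothetical responses, used only to exercise the similarity measure.
real = "Games like Journey feel like art because of their music and mood"
synthetic = "I think games such as Journey are art due to their music and atmosphere"

prompt = build_prompt("Can video games be art? Why or why not?", "a frequent gamer")
score = jaccard_similarity(real, synthetic)  # partial content overlap
```

In a real pipeline the prompt would be sent to an LLM API and the distinguishability of the returned responses tested against human-written ones.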
2
Exploring Memory-Oriented Interactions with Digital Photos In and Across Time: A Field Study of Chronoscope
Amy Yo Sue Chen (Simon Fraser University, Surrey, British Columbia, Canada)William Odom (Simon Fraser University, Surrey, British Columbia, Canada)Carman Neustaedter (Simon Fraser University, Surrey, British Columbia, Canada)Ce Zhong (Simon Fraser University, Surrey, British Columbia, Canada)Henry Lin (Simon Fraser University, Surrey, British Columbia, Canada)
We describe a field study of Chronoscope, a tangible photo viewer that lets people revisit and explore their digital photos with the support of temporal metadata. Chronoscope offers different temporal modalities for organizing one’s personal digital photo archive, and for exploring possible connections in and across time, and among photos and memories. We deployed four Chronoscopes in four households for three months to understand participants’ experiences over time. Our goals are to investigate the reflective potential of temporal modalities as an alternative design approach for supporting memory-oriented photo exploration, and empirically explore conceptual propositions related to slow technology. Findings revealed that Chronoscope catalyzed a range of reflective experiences on participants’ respective life histories and life stories. It opened up alternative ways of considering time and the potential longevity of personal photo archives. We conclude with implications that present opportunities for future HCI research and practice.
2
HandAvatar: Embodying Non-Humanoid Virtual Avatars through Hands
Yu Jiang (Tsinghua University, Beijing, China)Zhipeng Li (Department of Computer Science and Technology, Tsinghua University, Beijing, China)Mufei He (Tsinghua University, Beijing, China)David Lindlbauer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Yukang Yan (CMU, Pittsburgh, Pennsylvania, United States)
We propose HandAvatar to enable users to embody non-humanoid avatars using their hands. HandAvatar leverages the high dexterity and coordination of users' hands to control virtual avatars, enabled through our novel approach for automatically-generated joint-to-joint mappings. We contribute an observation study to understand users’ preferences on hand-to-avatar mappings on eight avatars. Leveraging insights from the study, we present an automated approach that generates mappings between users' hands and arbitrary virtual avatars by jointly optimizing control precision, structural similarity, and comfort. We evaluated HandAvatar on static posing, dynamic animation, and creative exploration tasks. Results indicate that HandAvatar enables more precise control, requires less physical effort, and achieves embodiment comparable to a state-of-the-art body-to-avatar control method. We demonstrate HandAvatar's potential with applications including non-humanoid avatar based social interaction in VR, 3D animation composition, and VR scene design with physical proxies. We believe that HandAvatar unlocks new interaction opportunities, especially for usage in Virtual Reality, by letting users become the avatar in applications including virtual social interaction, animation, gaming, or education.
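The joint optimization over control precision, structural similarity, and comfort can be illustrated with a toy weighted-score search over candidate joint mappings. The candidate mappings, per-criterion scores, and weights below are illustrative assumptions; the paper's actual objective function and optimizer are not reproduced here.

```python
def mapping_score(precision, similarity, comfort, weights=(0.4, 0.4, 0.2)):
    """Weighted sum of normalized (0..1) criterion scores for one
    candidate hand-to-avatar joint mapping."""
    wp, ws, wc = weights
    return wp * precision + ws * similarity + wc * comfort

# Hypothetical candidate mappings with (precision, similarity, comfort) scores.
candidates = {
    "index->wing, thumb->head": (0.8, 0.7, 0.9),
    "middle->wing, thumb->tail": (0.6, 0.9, 0.5),
}
best = max(candidates, key=lambda name: mapping_score(*candidates[name]))
```

A real system would score mappings from measured user performance and kinematic models rather than hand-assigned numbers, but the selection logic is the same.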
2
Don’t Just Tell Me, Ask Me: AI Systems that Intelligently Frame Explanations as Questions Improve Human Logical Discernment Accuracy over Causal AI explanations
Valdemar Danry (MIT, CAMBRIDGE, Massachusetts, United States)Pat Pataranutaporn (MIT, Boston, Massachusetts, United States)Yaoli Mao (Columbia University, New York, New York, United States)Pattie Maes (MIT Media Lab, Cambridge, Massachusetts, United States)
Critical thinking is an essential human skill. Despite the importance of critical thinking, research reveals that our reasoning ability suffers from personal biases and cognitive resource limitations, leading to potentially dangerous outcomes. This paper presents the novel idea of AI-framed Questioning that turns information relevant to the AI classification into questions to actively engage users' thinking and scaffold their reasoning process. We conducted a study with 204 participants comparing the effects of AI-framed Questioning on a critical thinking task: discernment of the logical validity of socially divisive statements. Our results show that, compared to no feedback and even causal AI explanations from an always-correct system, AI-framed Questioning significantly increases human discernment of logically flawed statements. Our experiment exemplifies a future style of Human-AI co-reasoning system, where the AI becomes a critical thinking stimulator rather than an information teller.
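The core contrast, rendering the same classifier evidence either as a declarative causal explanation or as a question that prompts the user's own reasoning, can be sketched with simple templates. The templates and the example evidence string are illustrative assumptions, not the study's actual materials.

```python
def causal_explanation(evidence: str, label: str) -> str:
    """Declarative feedback: the AI tells the user its verdict and why."""
    return f"This statement is {label} because {evidence}."

def framed_question(evidence: str) -> str:
    """AI-framed Questioning: the same evidence, reframed to prompt
    the user to evaluate the logic themselves."""
    return f"Does the conclusion really follow, given that {evidence}?"

# Hypothetical evidence a classifier might surface for a flawed argument.
evidence = "the premise assumes what it is trying to prove"
tell = causal_explanation(evidence, "logically invalid")
ask = framed_question(evidence)
```

The study's finding is that the `ask`-style framing improved participants' discernment of flawed statements relative to the `tell`-style explanation.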