List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

15
FakeForward: Using Deepfake Technology for Feedforward Learning
Christopher Clarke (University of Bath, Bath, United Kingdom); Jingnan Xu (University of Bath, Bath, United Kingdom); Ye Zhu (University of Bath, Bath, United Kingdom); Karan Dharamshi (University of Bath, Bath, United Kingdom); Harry McGill (University of Bath, Bath, United Kingdom); Stephen Black (University of Bath, Bath, United Kingdom); Christof Lutteroth (University of Bath, Bath, United Kingdom)
Videos are commonly used to support learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data and labour intensive: a lot of video footage needs to be collected and manually edited in order to create an effective self-modelling video. We address this by presenting FakeForward -- a method which uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user’s. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.
13
PunchPrint: Creating Composite Fiber-Filament Craft Artifacts by Integrating Punch Needle Embroidery and 3D Printing
Ashley Del Valle (University of California Santa Barbara, Santa Barbara, California, United States); Mert Toka (University of California Santa Barbara, Santa Barbara, California, United States); Alejandro Aponte (University of California Santa Barbara, Santa Barbara, California, United States); Jennifer Jacobs (University of California Santa Barbara, Santa Barbara, California, United States)
New printing strategies have enabled 3D-printed materials that imitate traditional textiles. These filament-based textiles are easy to fabricate but lack the look and feel of fiber textiles. We seek to augment 3D-printed textiles with needlecraft to produce composite materials that integrate the programmability of additive fabrication with the richness of traditional textile craft. We present PunchPrint: a technique for integrating fiber and filament in a textile by combining punch needle embroidery and 3D printing. Using a toolpath that imitates textile weave structure, we print a flexible fabric that provides a substrate for punch needle production. We evaluate our material’s robustness through tensile strength and needle compatibility tests. We integrate our technique into a parametric design tool and produce functional artifacts that show how PunchPrint broadens punch needle craft by reducing labor in small, detailed artifacts, enabling the integration of openings and multiple yarn weights, and scaffolding soft 3D structures.
8
Imprimer: Computational Notebooks for CNC Milling
Jasper Tran O'Leary (University of Washington, Seattle, Washington, United States); Gabrielle Benabdallah (University of Washington, Seattle, Washington, United States); Nadya Peek (University of Washington, Seattle, Washington, United States)
Digital fabrication in industrial contexts involves standardized procedures that prioritize precision and repeatability. However, fabrication machines are now available for practitioners who focus instead on experimentation. In this paper, we reframe hobbyist CNC milling as writing literate programs which interleave documentation, interactive graphics, and source code for machine control. To test this approach, we present Imprimer, a machine infrastructure for a CNC mill and an associated library for a computational notebook. Imprimer lets makers learn experimentally, prototype new interactions for making, and understand physical processes by writing and debugging code. We demonstrate three experimental milling workflows as computational notebooks, conduct a user study with practitioners with a range of backgrounds, and discuss literate programming as a future vision for digital fabrication altogether.
6
3D Printable Play-Dough: New Biodegradable Materials and Creative Possibilities for Digital Fabrication
Leah Buechley (University of New Mexico, Albuquerque, New Mexico, United States); Ruby Ta (University of New Mexico, Albuquerque, New Mexico, United States)
Play-dough is a brightly-colored, easy-to-make, and familiar material. We have developed and tested custom play-dough materials that can be employed in 3D printers designed for clay. This paper introduces a set of recipes for 3D printable play-dough along with an exploration of these materials' print characteristics. We explore the design potential of play-dough as a sustainable fabrication material, highlighting its recyclability, compostability, and repairability. We demonstrate how custom-color prints can be designed and constructed and describe how play-dough can be used as a support material for clay 3D prints. We also present a set of example artifacts made from play-dough and discuss opportunities for future research.
5
Augmenting Human Cognition with an AI-Mediated Intelligent Visual Feedback
Songlin Xu (University of California, San Diego, San Diego, California, United States); Xinyu Zhang (University of California San Diego, San Diego, California, United States)
In this paper, we introduce an AI-mediated framework that can provide intelligent feedback to augment human cognition. Specifically, we leverage deep reinforcement learning (DRL) to provide adaptive time pressure feedback to improve user performance in a math arithmetic task. Time pressure feedback could either improve or deteriorate user performance by regulating user attention and anxiety. Adaptive time pressure feedback controlled by a DRL policy according to users' real-time performance could potentially solve this trade-off problem. However, the DRL training and hyperparameter tuning may require large amounts of data and iterative user studies. Therefore, we propose a dual-DRL framework that trains a regulation DRL agent to regulate user performance by interacting with another simulation DRL agent that mimics user cognition behaviors from an existing dataset. Our user study demonstrates the feasibility and effectiveness of the dual-DRL framework in augmenting user performance, in comparison to the baseline group.
5
Understanding Moderators' Conflict and Conflict Management Strategies with Streamers in Live Streaming Communities
Jie Cai (Penn State University, University Park, Pennsylvania, United States); Donghee Yvette Wohn (New Jersey Institute of Technology, Newark, New Jersey, United States)
As each micro community centered around the streamer attempts to set its own guidelines in live streaming communities, it is common for volunteer moderators (mods) and the streamer to disagree on how to handle various situations. In this study, we conducted an online survey (N=240) with live streaming mods to explore their commitment to the streamer to grow the micro community and the different styles in which they handle conflicts with the streamer. We found that 1) mods apply more active and cooperative styles than passive and assertive styles to manage conflicts, but they might be forced to do so, and 2) mods with strong commitments to the streamer would like to apply styles showing either high concerns for the streamer or low concerns for themselves. We reflect on how these results can affect micro community development and recommend designs to mitigate conflict and strengthen commitment.
4
A Human-Computer Collaborative Editing Tool for Conceptual Diagrams
Lihang Pan (Tsinghua University, Beijing, China); Chun Yu (Tsinghua University, Beijing, China); Zhe He (Tsinghua University, Beijing, Beijing, China); Yuanchun Shi (Tsinghua University, Beijing, China)
Editing (e.g., editing conceptual diagrams) is a typical office task that requires numerous tedious GUI operations, resulting in poor interaction efficiency and user experience, especially on mobile devices. In this paper, we present a new type of human-computer collaborative editing tool (CET) that enables accurate and efficient editing with little interaction effort. CET divides the task into two parts, and the human and the computer focus on their respective specialties: the human describes high-level editing goals with multimodal commands, while the computer calculates, recommends, and performs detailed operations. We conducted a formative study (N = 16) to determine the concrete task division and implemented the tool on Android devices for the specific task of editing conceptual diagrams. The user study (N = 24 + 20) showed that it increased diagram editing speed by 32.75% compared with existing state-of-the-art commercial tools and led to better editing results and user experience.
4
Feel the Force, See the Force: Exploring Visual-tactile Associations of Deformable Surfaces with Colours and Shapes
Cameron Steer (University of Bath, Bath, United Kingdom); Teodora Dinca (University of Bath, Bath, United Kingdom); Crescent Jicol (University of Bath, Bath, United Kingdom); Michael J. Proulx (University of Bath, Bath, United Kingdom); Jason Alexander (University of Bath, Bath, United Kingdom)
Deformable interfaces provide unique interaction potential for force input, for example, when users physically push into a soft display surface. However, there remains limited understanding of which visual-tactile design elements signify the presence and stiffness of such deformable force-input components. In this paper, we explore how people correspond surface stiffness to colours, graphical shapes, and physical shapes. We conducted a cross-modal correspondence (CC) study, where 30 participants associated different surface stiffnesses with colours and shapes. Our findings evidence the CCs between stiffness levels for a subset of the 2D/3D shapes and colours used in the study. We distil our findings into three design recommendations: (1) lighter colours should be used to indicate soft surfaces, and darker colours should indicate stiff surfaces; (2) rounded shapes should be used to indicate soft surfaces, while less-curved shapes should be used to indicate stiffer surfaces; and (3) longer 2D drop-shadows should be used to indicate softer surfaces, while shorter drop-shadows should be used to indicate stiffer surfaces.
4
CiteSee: Augmenting Citations in Scientific Papers with Persistent and Personalized Historical Context
Joseph Chee Chang (Allen Institute for AI, Seattle, Washington, United States); Amy X. Zhang (University of Washington, Seattle, Washington, United States); Jonathan Bragg (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Andrew Head (University of Pennsylvania, Philadelphia, Pennsylvania, United States); Kyle Lo (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Doug Downey (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Daniel S. Weld (Allen Institute for Artificial Intelligence, Seattle, Washington, United States)
When reading a scholarly article, inline citations help researchers contextualize the current article and discover relevant prior work. However, it can be challenging to prioritize and make sense of the hundreds of citations encountered during literature reviews. This paper introduces CiteSee, a paper reading tool that leverages a user's publishing, reading, and saving activities to provide personalized visual augmentations and context around citations. First, CiteSee connects the current paper to familiar contexts by surfacing known citations a user had cited or opened. Second, CiteSee helps users prioritize their exploration by highlighting relevant but unknown citations based on saving and reading history. We conducted a lab study that suggests CiteSee is significantly more effective for paper discovery than three baselines. A field deployment study shows CiteSee helps participants keep track of their explorations and leads to better situational awareness and increased paper discovery via inline citation when conducting real-world literature reviews.
4
Co-Writing with Opinionated Language Models Affects Users' Views
Maurice Jakesch (Cornell University, Ithaca, New York, United States); Advait Bhat (Microsoft Research India, Bangalore, India); Daniel Buschek (University of Bayreuth, Bayreuth, Germany); Lior Zalmanson (Tel Aviv University, Tel Aviv, Tel Aviv District, Israel); Mor Naaman (Cornell Tech, New York, New York, United States)
If large language models like GPT-3 preferably produce a particular point of view, they may influence people's opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write -- and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
4
“Should I Follow the Human, or Follow the Robot?” — Robots in Power Can Have More Influence Than Humans on Decision-Making
Yoyo Tsung-Yu Hou (Cornell University, Ithaca, New York, United States); Wen-Ying Lee (Cornell University, Ithaca, New York, United States); Malte F. Jung (Cornell University, Ithaca, New York, United States)
Artificially intelligent (AI) agents such as robots are increasingly delegated power in work settings, yet it remains unclear how power functions in interactions with both humans and robots, especially when they directly compete for influence. Here we present an experiment where every participant was matched with one human and one robot to perform decision-making tasks. By manipulating who has power, we created three conditions: human as leader, robot as leader, and a no-power-difference control. The results showed that the participants were significantly more influenced by the leader, regardless of whether the leader was a human or a robot. However, they generally held a more positive attitude toward the human than the robot, although they considered whichever was in power as more competent. This study illustrates the importance of power for future Human-Robot Interaction (HRI) and Human-AI Interaction (HAI) research, as it addresses pressing concerns of society about AI-powered intelligent agents.
3
What is Human-Centered about Human-Centered AI? A Map of the Research Landscape
Tara Capel (Queensland University of Technology, Brisbane, QLD, Australia); Margot Brereton (QUT, Brisbane, Australia)
The application of Artificial Intelligence (AI) across a wide range of domains comes with both high expectations of its benefits and dire predictions of misuse. While AI systems have largely been driven by a technology-centered design approach, the potential societal consequences of AI have mobilized both HCI and AI researchers towards researching human-centered artificial intelligence (HCAI). However, there remains considerable ambiguity about what it means to frame, design and evaluate HCAI. This paper presents a critical review of the large corpus of peer-reviewed literature emerging on HCAI in order to characterize what the community is defining as HCAI. Our review contributes an overview and map of HCAI research based on work that explicitly mentions the terms ‘human-centered artificial intelligence’ or ‘human-centered machine learning’ or their variations, and suggests future challenges and research directions. The map reveals the breadth of research happening in HCAI, established clusters and the emerging areas of Interaction with AI and Ethical AI. The paper contributes a new definition of HCAI, and calls for greater collaboration between AI and HCI research, and new HCAI constructs.
3
User-Driven Constraints for Layout Optimisation in Augmented Reality
Aziz Niyazov (IRIT - University of Toulouse, Toulouse, France); Barrett Ens (Monash University, Melbourne, Australia); Kadek Ananta Satriadi (Monash University, Melbourne, Australia); Nicolas Mellado (CNRS, Toulouse, France); Loic Barthe (IRIT - University of Toulouse, Toulouse, France); Tim Dwyer (Monash University, Melbourne, VIC, Australia); Marcos Serrano (IRIT - Elipse, Toulouse, France)
Automatic layout optimisation allows users to arrange augmented reality content in the real-world environment without the need for tedious manual interactions. This optimisation is often based on modelling the intended content placement as constraints, defined as cost functions. Then, applying a cost minimization algorithm leads to a desirable placement. However, such an approach is limited by the lack of user control over the optimisation results. In this paper we explore the concept of user-driven constraints for augmented reality layout optimisation. With our approach users can define and set up their own constraints directly within the real-world environment. We first present a design space composed of three dimensions: the constraints, the regions of interest and the constraint parameters. Then we explore which input gestures can be employed to define the user-driven constraints of our design space through a user elicitation study. Using the results of the study, we propose a holistic system design and implementation demonstrating our user-driven constraints, which we evaluate in a final user study where participants had to create several constraints at the same time to arrange a set of virtual contents.
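The constraint-as-cost-function formulation described in this abstract can be illustrated with a toy sketch. This is not the paper's implementation; the constraint names (`near`, `avoid`), weights, and grid-search minimiser are invented purely for illustration of the general idea that each constraint contributes a cost and the layout is the placement minimising the summed costs.

```python
# Toy sketch of layout optimisation via summed constraint costs.
# Each "constraint" maps a candidate 2D position to a cost; the
# optimiser returns the grid point with the lowest total cost.

def near(anchor, w=1.0):
    """Cost grows with squared distance from an anchor point."""
    return lambda p: w * ((p[0] - anchor[0]) ** 2 + (p[1] - anchor[1]) ** 2)

def avoid(region, w=10.0):
    """Flat penalty for placements inside a rectangular region."""
    x0, y0, x1, y1 = region
    return lambda p: w if (x0 <= p[0] <= x1 and y0 <= p[1] <= y1) else 0.0

def place(constraints, grid):
    """Brute-force cost minimisation over candidate positions."""
    return min(grid, key=lambda p: sum(c(p) for c in constraints))

# Candidate positions on a unit square, step 0.1.
grid = [(x / 10, y / 10) for x in range(11) for y in range(11)]

# Stay close to (0.2, 0.8) but keep out of the upper-left quadrant.
best = place([near((0.2, 0.8)), avoid((0.0, 0.5, 0.5, 1.0))], grid)
```

A real system would replace the grid search with a proper numerical minimiser and use perceptually motivated cost terms, but the structure, user-defined constraints summed into one objective, is the same.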
3
The Intricacies of Social Robots: Secondary Analysis of Fictional Documentaries to Explore the Benefits and Challenges of Robots in Complex Social Settings
Judith Dörrenbächer (University of Siegen, Siegen, Germany); Ronda Ringfort-Felner (University of Siegen, Siegen, Germany); Marc Hassenzahl (University of Siegen, Siegen, Germany)
In the design of social robots, the focus is often on the robot itself rather than on the intricacies of possible application scenarios. In this paper, we examine eight fictional documentaries about social robots, such as SEYNO, a robot that promotes respect between passengers in trains, or PATO, a robot to watch movies with. Overall, robots were conceptualized either (1) to substitute humans in relationships or (2) to mediate relationships (human-human-robot-interaction). While the former is the basis of many current approaches to social robotics, the latter is less common, but particularly interesting. For instance, the mediation perspective fundamentally impacts the role a robot takes (e.g., role model, black sheep, ally, opponent, moralizer) and thus its potential function and form. From the substitution perspective, robots are expected to mimic human emotions; from the mediation perspective, robots can be positive precisely because they remain objective and are neither emotional nor empathic.
3
Libraries of Things: Understanding the Challenges of Sharing Tangible Collections and the Opportunities for HCI
Lee Jones (Queen's University, Kingston, Ontario, Canada); Alaa Nousir (Queen's University, Kingston, Ontario, Canada); Tom Everrett (Ingenium - Canada's Museums of Science and Innovation, Ottawa, Ontario, Canada); Sara Nabil (Queen's University, Kingston, Ontario, Canada)
“Libraries of Things” are tangible collections of borrowable objects. There are many benefits to Libraries of Things, such as making objects and skill-building accessible, reducing waste through the sharing of items, and saving costs associated with purchasing rarely-used items. We introduce the first HCI study of Libraries of Things by interviewing 23 librarians who run a variety of collections such as handheld tools, gear, and musical instruments – within public institutions and more grass-roots efforts in the private sector. In our findings, we discuss the challenges these collections experience in changing behavioural patterns from buying to borrowing, helping individuals 'try new things', iterating to find sharable items, training staff, and manual intervention throughout the borrowing cycle. We present 5 opportunities for HCI research to support interactive skill-sharing, self-borrowing, maintenance recognition and cataloguing 'things', organizing non-uniform inventories, and creating public-awareness. Further in-the-wild studies should also consider the tensions between the values of these organizations and low-cost convenient usage.
3
Short-Form Videos Degrade Our Capacity to Retain Intentions: Effect of Context Switching On Prospective Memory
Francesco Chiossi (LMU Munich, Munich, Germany); Luke Haliburton (LMU Munich, Munich, Germany); Changkun Ou (LMU Munich, Munich, Germany); Andreas Martin Butz (LMU Munich, Munich, Germany); Albrecht Schmidt (LMU Munich, Munich, Germany)
Social media platforms use short, highly engaging videos to catch users' attention. While the short-form video feeds popularized by TikTok are rapidly spreading to other platforms, we do not yet understand their impact on cognitive functions. We conducted a between-subjects experiment (N=60) investigating the impact of engaging with TikTok, Twitter, and YouTube while performing a Prospective Memory task (i.e., executing a previously planned action). The study required participants to remember intentions over interruptions. We found that the TikTok condition significantly degraded the users’ performance in this task. As none of the other conditions (Twitter, YouTube, no activity) had a similar effect, our results indicate that the combination of short videos and rapid context-switching impairs intention recall and execution. We contribute a quantified understanding of the effect of social media feed format on Prospective Memory and outline consequences for media technology designers to not harm the users’ memory and wellbeing.
3
ChameleonControl: Teleoperating Real Human Surrogates through Mixed Reality Gestural Guidance for Remote Hands-on Classrooms
Mehrad Faridan (University of Calgary, Calgary, Alberta, Canada); Bheesha Kumari (University of Calgary, Calgary, Alberta, Canada); Ryo Suzuki (University of Calgary, Calgary, Alberta, Canada)
We present ChameleonControl, a real-human teleoperation system for scalable remote instruction in hands-on classrooms. In contrast to existing video or AR/VR-based remote hands-on education, ChameleonControl uses a real human as a surrogate of a remote instructor. Building on existing human-based telepresence approaches, we contribute a novel method to teleoperate a human surrogate through synchronized mixed reality hand gestural navigation and verbal communication. By overlaying the remote instructor's virtual hands in the local user's MR view, the remote instructor can guide and control the local user as if they were physically present. This allows the local user/surrogate to synchronize their hand movements and gestures with the remote instructor, effectively teleoperating a real human. We deploy and evaluate our system in classrooms of physiotherapy training, as well as other application domains such as mechanical assembly, sign language and cooking lessons. The study results confirm that our approach can increase engagement and the sense of co-presence, showing potential for the future of remote hands-on classrooms.
3
Subjective Probability Correction for Uncertainty Representations
Fumeng Yang (Northwestern University, Evanston, Illinois, United States); Maryam Hedayati (Northwestern University, Evanston, Illinois, United States); Matthew Kay (Northwestern University, Chicago, Illinois, United States)
We propose a new approach to uncertainty communication: we keep the uncertainty representation fixed, but adjust the distribution displayed to compensate for biases in people’s subjective probability in decision-making. To do so, we adopt a linear-in-probit model of subjective probability and derive two corrections to a Normal distribution based on the model’s intercept and slope: one correcting all right-tailed probabilities, and the other preserving the mode and one focal probability. We then conduct two experiments on U.S. demographically-representative samples. We show participants hypothetical U.S. Senate election forecasts as text or a histogram and elicit their subjective probabilities using a betting task. The first experiment estimates the linear-in-probit intercepts and slopes, and confirms the biases in participants’ subjective probabilities. The second, preregistered follow-up shows participants the bias-corrected forecast distributions. We find the corrections substantially improve participants’ decision quality by reducing the integrated absolute error of their subjective probabilities compared to the true probabilities. These corrections can be generalized to any univariate probability or confidence distribution, giving them broad applicability. Our preprint, code, data, and preregistration are available at https://doi.org/10.17605/osf.io/kcwxm.
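The linear-in-probit model this abstract relies on can be made concrete with a short sketch. This is not the authors' code: the intercept `a` and slope `b` below are placeholders for the parameters the paper estimates from data, and only the right-tail-style correction is shown (the mode-preserving variant is omitted).

```python
from statistics import NormalDist

_N = NormalDist()  # standard normal: cdf is Phi, inv_cdf is Phi^-1

def perceived(p, a, b):
    """Linear-in-probit model of subjective probability:
    perceived = Phi(a + b * Phi^-1(displayed))."""
    return _N.cdf(a + b * _N.inv_cdf(p))

def corrected(p, a, b):
    """Invert the model: find the probability to *display* so that,
    under the model, the viewer's subjective probability equals
    the true probability p."""
    return _N.cdf((_N.inv_cdf(p) - a) / b)
```

By construction, `perceived(corrected(p, a, b), a, b)` recovers `p`, which is the sense in which displaying the corrected distribution compensates for the bias.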
3
The Walking Talking Stick: Understanding Automated Note-Taking in Walking Meetings
Luke Haliburton (LMU Munich, Munich, Germany); Natalia Bartłomiejczyk (Lodz University of Technology, Lodz, Poland); Albrecht Schmidt (LMU Munich, Munich, Germany); Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden); Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland)
While walking meetings offer a healthy alternative to sit-down meetings, they also pose practical challenges. Taking notes is difficult while walking, which limits the potential of walking meetings. To address this, we designed the Walking Talking Stick---a tangible device with integrated voice recording, transcription, and a physical highlighting button to facilitate note-taking during walking meetings. We investigated our system in a three-condition between-subjects user study with thirty pairs of participants (N=60) who conducted 15-minute outdoor walking meetings. Participants either used clip-on microphones, the prototype without the button, or the prototype with the highlighting button. We found that the tangible device increased task focus, and the physical highlighting button facilitated turn-taking and resulted in more useful notes. Our work demonstrates how interactive artifacts can incentivize users to hold meetings in motion and enhance conversation dynamics. We contribute insights for future systems which support conducting work tasks in mobile environments.
3
Can Voice Assistants Be Microaggressors? Cross-Race Psychological Responses to Failures of Automatic Speech Recognition
Kimi Wenzel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Nitya Devireddy (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Cam Davison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Geoff Kaufman (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Language technologies have a racial bias, committing greater errors for Black users than for white users. However, little work has evaluated what effect these disparate error rates have on users themselves. The present study aims to understand if speech recognition errors in human-computer interactions may mirror the same effects as misunderstandings in interpersonal cross-race communication. In a controlled experiment (N=108), we randomly assigned Black and white participants to interact with a voice assistant pre-programmed to exhibit a high versus low error rate. Results revealed that Black participants in the high error rate condition, compared to Black participants in the low error rate condition, exhibited significantly higher levels of self-consciousness, lower levels of self-esteem and positive affect, and less favorable ratings of the technology. White participants did not exhibit this disparate pattern. We discuss design implications and the diverse research directions to which this initial study aims to contribute.
3
UEyes: Understanding Visual Saliency across User Interface Types
Yue Jiang (Aalto University, Espoo, Finland); Luis A. Leiva (University of Luxembourg, Esch-sur-Alzette, Luxembourg); Hamed Rezazadegan Tavakoli (Nokia Technologies, Espoo, Finland); Paul R. B. Houssel (University of Luxembourg, Esch-sur-Alzette, Luxembourg); Julia Kylmälä (Aalto University, Espoo, Finland); Antti Oulasvirta (Aalto University, Helsinki, Finland)
While user interfaces (UIs) display elements such as images and text in a grid-based layout, UI types differ significantly in the number of elements and how they are displayed. For example, webpage designs rely heavily on images and text, whereas desktop UIs tend to feature numerous small images. To examine how such differences affect the way users look at UIs, we collected and analyzed a large eye-tracking-based dataset, UEyes (62 participants and 1,980 UI screenshots), covering four major UI types: webpage, desktop UI, mobile UI, and poster. We analyze differences across UI types in biases related to factors such as color, location, and gaze direction. We also compare state-of-the-art predictive models and propose improvements for better capturing typical tendencies across UI types. Both the dataset and the models are publicly available.
3
Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools
Frederic Gmeiner (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Humphrey Yang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Kenneth Holstein (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Nikolas Martelaro (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks. These tools take on more agentic roles than traditional computer-aided design tools and are often portrayed as “co-creators.” Yet, working effectively with such systems requires different skills than working with complex CAD tools alone. To date, we know little about how engineering designers learn to work with AI-based design tools. In this study, we observed trained designers as they learned to work with two AI-based tools on a realistic design task. We find that designers face many challenges in learning to effectively co-create with current systems, including challenges in understanding and adjusting AI outputs and in communicating their design goals. Based on our findings, we highlight several design opportunities to better support designer-AI co-creation.
3
Birds of a Feather Video-Flock Together: Design and Evaluation of an Agency-Based Parrot-to-Parrot Video-Calling System for Interspecies Ethical Enrichment
Rebecca Kleinberger (Northeastern University, Boston, Massachusetts, United States); Jennifer Cunha (Parrot Kindergarten, Jupiter, Florida, United States); Megha M. Vemuri (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States); Ilyena Hirskyj-Douglas (The University of Glasgow, Glasgow, United Kingdom)
Over 20 million parrots are kept as pets in the US, often lacking appropriate stimuli to meet their high social, cognitive, and emotional needs. After reviewing bird perception and agency literature, we developed an approach to allow parrots to engage in video-calling other parrots. Following a pilot experiment and expert survey, we ran a three-month study with 18 pet birds to evaluate the potential value and usability of a parrot-parrot video-calling system. We assessed the system in terms of perception, agency, engagement, and overall perceived benefits. With 147 bird-triggered calls, our results show that 1) every bird used the system, 2) most birds exhibited high motivation and intentionality, and 3) all caretakers reported perceived benefits, some arguably life-transformative, such as learning to forage or even to fly by watching others. We report on individual insights and propose considerations regarding ethics and the potential of parrot video-calling for enrichment.
3
Bias-Aware Systems: Exploring Indicators for the Occurrences of Cognitive Biases when Facing Different Opinions
Nattapat Boonprakong (University of Melbourne, Parkville, Victoria, Australia)Xiuge Chen (The University of Melbourne, Melbourne, Victoria, Australia)Catherine Davey (University of Melbourne, Parkville, Victoria, Australia)Benjamin Tag (University of Melbourne, Melbourne, Victoria, Australia)Tilman Dingler (University of Melbourne, Melbourne, Victoria, Australia)
Cognitive biases have been shown to play a critical role in creating echo chambers and spreading misinformation. They undermine our ability to evaluate information and can influence our behaviour without our awareness. To allow the study of occurrences and effects of biases on information consumption behaviour, we explore indicators for cognitive biases in physiological and interaction data. Therefore, we conducted two experiments investigating how people experience statements that are congruent or divergent from their own ideological stance. We collected interaction data, eye tracking data, hemodynamic responses, and electrodermal activity while participants were exposed to ideologically tainted statements. Our results indicate that people spend more time processing statements that are incongruent with their own opinion. We detected differences in blood oxygenation levels between congruent and divergent opinions, a first step towards building systems to detect and quantify cognitive biases.
3
GestureExplorer: Immersive Visualisation and Exploration of Gesture Data
Ang Li (Monash University, Melbourne, Australia)Jiazhou Liu (Monash University, Melbourne, VIC, Australia)Maxime Cordeil (The University Of Queensland, Brisbane, Australia)Jack Topliss (University of Canterbury, Christchurch, Canterbury, New Zealand)Thammathip Piumsomboon (University of Canterbury, Christchurch, Canterbury, New Zealand)Barrett Ens (Monash University, Melbourne, Australia)
This paper presents the design and evaluation of GestureExplorer, an Immersive Analytics tool that supports interactive exploration, classification, and sensemaking with large sets of 3D temporal gesture data. GestureExplorer features 3D skeletal and trajectory visualisations of gestures combined with abstract visualisations of clustered sets of gestures. By leveraging the large immersive space afforded by a Virtual Reality interface, our tool allows free navigation and control of viewing perspective for users to gain a better understanding of gestures. We explored a selection of classification methods to provide an overview of the dataset that was linked to a detailed view of the data that showed different visualisation modalities. We evaluated GestureExplorer with two user studies and collected feedback from participants with diverse visualisation and analytics backgrounds. Our results demonstrate the promising capability of GestureExplorer for providing a useful and engaging experience in exploring and analysing gesture data.
3
“I Won't Go Speechless”: Design Exploration on a Real-Time Text-To-Speech Speaking Tool for Videoconferencing
Wooseok Kim (KAIST, Daejeon, Korea, Republic of)Jian Jun (KAIST, Daejeon, Korea, Republic of)Minha Lee (KAIST, Daejeon, Korea, Republic of)Sangsu Lee (KAIST, Daejeon, Korea, Republic of)
The COVID-19 pandemic has shifted many business activities to non-face-to-face activities, and videoconferencing has become a new paradigm. However, conference spaces isolated from surrounding interferences are not always readily available. People frequently participate in public places with unexpected crowds or acquaintances, such as cafés, living rooms, and shared offices. These environments have surrounding limitations that potentially cause challenges in speaking up during videoconferencing. To alleviate these issues and support the users in speaking-restrained spatial contexts, we propose a text-to-speech (TTS) speaking tool as a new speaking method to support active videoconferencing participation. We derived the possibility of a TTS speaking tool and investigated the empirical challenges and user expectations of a TTS speaking tool using a technology probe and participatory design methodology. Based on our findings, we discuss the need for a TTS speaking tool and suggest design considerations for its application in videoconferencing.
3
The Tactile Dimension: A Method for Physicalizing Touch Behaviors
Laura J. Perovich (Northeastern University, Boston, Massachusetts, United States)Bernice Rogowitz (Visual Perspectives Research , Westchester, New York, United States)Victoria Crabb (Northeastern University, Boston, Massachusetts, United States)Jack Vogelsang (Northeastern University, Boston, Massachusetts, United States)Sara Hartleben (Northeastern University, Boston, Massachusetts, United States)Dietmar Offenhuber (Northeastern University, Boston, Massachusetts, United States)
Traces of touch provide valuable insight into how we interact with the physical world. Measuring touch behavior, however, is expensive and imprecise. Utilizing a fluorescent UV tracer powder, we developed a low-cost analog method to capture persistent, high-contrast touch records on arbitrary objects. We describe our process for selecting a tracer, methods for capturing, enhancing, and aggregating traces, and approaches to examining qualitative aspects of the user experience. Three user studies demonstrate key features of this method. First, we show that it provides clear and durable traces on objects representative of scientific visualization, physicalization, and product design. Second, we demonstrate how this method could be used to study touch perception, by measuring how task and narrative framing elicit different touch behaviors on the same object. Third, we demonstrate how this method can be used to evaluate data physicalizations by observing how participants touch two different physicalizations of COVID-19 time-series data.
3
Smartphone-derived Virtual Keyboard Dynamics Coupled with Accelerometer Data as a Window into Understanding Brain Health
Emma Ning (University of Illinois at Chicago, Chicago, Illinois, United States)Andrea T. Cladek (University of Illinois at Chicago, Chicago, Illinois, United States)Mindy K. Ross (University of Illinois at Chicago, Chicago, Illinois, United States)Sarah Kabir (University of Illinois at Chicago, Chicago, Illinois, United States)Amruta Barve (University of Illinois at Chicago, Chicago, Illinois, United States)Ellyn Kennelly (Wayne State University, Detroit, Michigan, United States)Faraz Hussain (University of Illinois at Chicago, Chicago, Illinois, United States)Jennifer Duffecy (University of Illinois at Chicago, Chicago, Illinois, United States)Scott Langenecker (University of Utah, Salt Lake City, Utah, United States)Theresa Nguyen (University of Illinois at Chicago, Chicago, Illinois, United States)Theja Tulabandhula (University of Illinois at Chicago, Chicago, Illinois, United States)John Zulueta (University of Illinois at Chicago, Chicago, Illinois, United States)Olusola A. Ajilore (University of Illinois, Chicago (UIC), Chicago, Illinois, United States)Alexander P. Demos (University of Illinois at Chicago, Chicago, Illinois, United States)Alex Leow (University of Illinois, Chicago (UIC), Chicago, Illinois, United States)
We examine the feasibility of using accelerometer data exclusively collected during typing on a custom smartphone keyboard to study whether typing dynamics are associated with daily variations in mood and cognition. As part of an ongoing digital mental health study involving mood disorders, we collected data from a well-characterized clinical sample (N = 85) and classified accelerometer data per typing session into orientation (upright vs. not) and motion (active vs. not). The mood disorder group showed lower cognitive performance despite mild symptoms (depression/mania). There were also diurnal pattern differences with respect to cognitive performance: individuals with higher cognitive performance typed faster and were less sensitive to time of day. They also exhibited more well-defined diurnal patterns in smartphone keyboard usage: they engaged with the keyboard more during the day and tapered their usage more at night compared to those with lower cognitive performance, suggesting a healthier usage of their phone.
3
UndoPort: Exploring the Influence of Undo-Actions for Locomotion in Virtual Reality on the Efficiency, Spatial Understanding and User Experience
Florian Müller (LMU Munich, Munich, Germany)Arantxa Ye (LMU Munich, Munich, Germany)Dominik Schön (TU Darmstadt, Darmstadt, Germany)Julian Rasch (LMU Munich, Munich, Germany)
When we get lost in Virtual Reality (VR) or want to return to a previous location, we use the same methods of locomotion for the way back as for the way forward. This is time-consuming and requires additional physical orientation changes, increasing the risk of getting tangled in the headsets' cables. In this paper, we propose the use of undo actions to revert locomotion steps in VR. We explore eight different variations of undo actions as extensions of point & teleport, based on the possibility to undo position and orientation changes together with two different visualizations of the undo step (discrete and continuous). We contribute the results of a controlled experiment with 24 participants investigating the efficiency and orientation of the undo techniques in a radial maze task. We found that the combination of position and orientation undo together with a discrete visualization resulted in the highest efficiency without increasing orientation errors.
2
Characteristics of Deep and Skim Reading on Smartphones vs. Desktop: A Comparative Study
Xiuge Chen (The University of Melbourne, Melbourne, Victoria, Australia)Namrata Srivastava (Monash University, Melbourne, Victoria, Australia)Rajiv Jain (Adobe Research, College Park, Maryland, United States)Jennifer Healey (Adobe Research, San Jose, California, United States)Tilman Dingler (University of Melbourne, Melbourne, Victoria, Australia)
Deep reading fosters text comprehension, memory, and critical thinking. The growing prevalence of digital reading on mobile interfaces raises concerns that deep reading is being replaced by skimming and sifting through information, but this is currently unmeasured. Traditionally, reading quality is assessed using comprehension tests, which require readers to explicitly answer a set of carefully composed questions. To quantify and understand reading behaviour in natural settings and at scale, however, implicit measures of deep versus skim reading are needed across desktop and mobile devices, the most prominent digital reading platforms. In this paper, we present an approach to systematically induce deep and skim reading and subsequently train classifiers to discriminate these two reading styles based on eye movement patterns and interaction data. Based on a user study with 29 participants, we created models that detect deep reading on both devices with up to 0.82 AUC. We present the characteristics of deep reading and discuss how our models can be used to measure the effect of reading UI design and monitor long-term changes in reading behaviours.
2
MR.Brick: Designing A Mixed-reality Educational Game System for Promoting Children's Remote Social & Collaborative Skill
Yudan Wu (Tsinghua University, Beijing, China)Shanhe You (Tsinghua University, Beijing, China)Zixuan Guo (Tsinghua University, Beijing, China)Xiangyang Li (Tsinghua University, Beijing, China)Guyue Zhou (Tsinghua University, Beijing, China)Jiangtao Gong (Tsinghua University, Beijing, China)
Children are one of the groups most influenced by COVID-19-related social distancing, and a lack of contact with peers can limit their opportunities to develop social and collaborative skills. However, remote socialization and collaboration as an alternative approach is still a great challenge for children. This paper presents MR.Brick, a Mixed Reality (MR) educational game system that helps children adapt to remote collaboration. A controlled experimental study involving 24 children aged six to ten was conducted to compare MR.Brick with the traditional video game by measuring their social and collaborative skills and analyzing their multi-modal playing behaviours. The results showed that MR.Brick was more conducive to children's remote collaboration experience than the traditional video game. Given the lack of training systems designed for children to collaborate remotely, this study may inspire interaction design and educational research in related fields.
2
IntimaSea: Exploring Shared Stress Display in Close Relationships
Yanqi Jiang (Fudan University, Shanghai, Shanghai, China)Xianghua (Sharon) Ding (University of Glasgow, Glasgow, Lanarkshire, United Kingdom)Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, Hong Kong)Zhida Sun (Human-Computer Interaction Initiative, Hong Kong, China)Ning Gu (Fudan University, Shanghai, Shanghai, China)
Automatic stress tracking has become increasingly available on wearable devices. Research has investigated its use for individual stress management, largely within the traditional data-as-care framing. However, its use for stress sharing in social relationships, particularly close relationships, is still underexplored. Inspired by the idea of "caring-through-data", which focuses on mediating the social and emotional experiences of the collective "us" with data, this paper presents a design study with a prototype called IntimaSea, a display featuring illustrative stress data in collective forms to be shared among close relationships. The field trials with nine groups of intimately-connected users (N=19) highlight its potential for stress awareness, interpretation, and management, as well as intimacy promotion. We end by discussing sharing stress for social ways of stress management, stress data as a meaningful social cue mediating relationships, as well as design implications for caring-through-data.
2
Visual Captions: Augmenting Verbal Communication with On-the-fly Visuals
Xingyu "Bruce" Liu (UCLA, Los Angeles, California, United States)Vladimir Kirilyuk (Google, Mountain View, California, United States)Xiuxiu Yuan (Google, Mountain View, California, United States)Alex Olwal (Google Inc., Mountain View, California, United States)Peggy Chi (Google Research, Mountain View, California, United States)Xiang 'Anthony' Chen (UCLA, Los Angeles, California, United States)Ruofei Du (Google, San Francisco, California, United States)
Video conferencing solutions like Zoom, Google Meet, and Microsoft Teams are becoming increasingly popular for facilitating conversations, and recent advancements such as live captioning help people better understand each other. We believe that the addition of visuals based on the context of conversations could further improve comprehension of complex or unfamiliar concepts. To explore the potential of such capabilities, we conducted a formative study through remote interviews (N=10) and crowdsourced a dataset of over 1500 sentence-visual pairs across a wide range of contexts. These insights informed Visual Captions, a real-time system that integrates with a videoconferencing platform to enrich verbal communication. Visual Captions leverages a fine-tuned large language model to proactively suggest relevant visuals in open-vocabulary conversations. We present the findings from a lab study (N=26) and an in-the-wild case study (N=10), demonstrating how Visual Captions can help improve communication through visual augmentation in various scenarios.
2
Memory Manipulations in Extended Reality
Elise Bonnail (Institut Polytechnique de Paris, Paris, France)Wen-Jie Tseng (Institut Polytechnique de Paris, Paris, France)Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom)Eric Lecolinet (Institut Polytechnique de Paris, Paris, France)Samuel Huron (Télécom Paris, Institut Polytechnique de Paris, Palaiseau, ile de France, France)Jan Gugenheimer (TU-Darmstadt, Darmstadt, Germany)
Human memory has notable limitations (e.g., forgetting) which have necessitated a variety of memory aids (e.g., calendars). As we grow closer to mass adoption of everyday Extended Reality (XR), which is frequently leveraging perceptual limitations (e.g., redirected walking), it becomes pertinent to consider how XR could leverage memory limitations (forgetting, distorting, persistence) to induce memory manipulations. As memories highly impact our self-perception, social interactions, and behaviors, there is a pressing need to understand XR Memory Manipulations (XRMMs). We ran three speculative design workshops (n=12), with XR and memory researchers creating 48 XRMM scenarios. Through thematic analysis, we define XRMMs, present a framework of their core components and reveal three classes (at encoding, pre-retrieval, at retrieval). Each class differs in terms of technology (AR, VR) and impact on memory (influencing quality of memories, inducing forgetting, distorting memories). We raise ethical concerns and discuss opportunities of perceptual and memory manipulations in XR.
2
Imagine That! Imaginative Suggestibility Affects Presence in Virtual Reality
Crescent Jicol (University of Bath, Bath, United Kingdom)Christopher Clarke (University of Bath, Bath, United Kingdom)Emilia Tor (University of Bath, Bath, United Kingdom)Hiu Lam Yip (University of Bath, Bath, United Kingdom)Jinha Yoon (University of Bath, Bath, Somerset, United Kingdom)Chris Bevan (University of Bristol, Bristol, United Kingdom)Hugh Bowden (King's College London, London, United Kingdom)Elisa Brann (King's College London, London, United Kingdom)Kirsten Cater (University of Bristol, Bristol, United Kingdom)Richard Cole (University of Bristol, Bristol, United Kingdom)Quinton Deeley (King's College London, London, United Kingdom)Esther Eidinow (University of Bristol, Bristol, United Kingdom)Eamonn O'Neill (University of Bath, Bath, United Kingdom)Christof Lutteroth (University of Bath, Bath, United Kingdom)Michael J. Proulx (University of Bath, Bath, United Kingdom)
Personality characteristics can affect how much presence an individual experiences in virtual reality, and researchers have explored how it may be possible to prime users to increase their sense of presence. A personality characteristic that has yet to be explored in the VR literature is imaginative suggestibility, the ability of an individual to successfully experience an imaginary scenario as if it were real. In this paper, we explore how suggestibility and priming affect presence when consulting an ancient oracle in VR as part of an educational experience -- a common VR application. We show for the first time how imaginative suggestibility is a major factor which affects presence and emotions experienced in VR, while priming cues have no effect on participants' (n=128) user experience, contrasting results from prior work. We consider the impacts of these findings for VR design and provide guidelines based on our results.
2
Nooks: Social Spaces to Lower Hesitations in Interacting with New People at Work
Shreya Bali (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Pranav Khadpe (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Geoff Kaufman (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Chinmay Kulkarni (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Initiating conversations with new people at work is often intimidating because of uncertainty about their interests. People worry others may reject their attempts to initiate conversation or that others may not enjoy the conversation. We introduce a new system, Nooks, built on Slack, that reduces fear of social evaluation by enabling individuals to initiate any conversation as a nook—a conversation room that identifies its topic, but not its creator. Automatically convening others interested in the nook, Nooks further reduces fears of social evaluation by guaranteeing individuals in advance that others they are about to interact with are interested in the conversation. In a multi-month deployment with participants in a summer research program, Nooks provided participants with non-threatening and inclusive interaction opportunities, and ambient awareness, leading to new interactions online and offline. Our results demonstrate how intentionally designed social spaces can reduce fears of social evaluation and catalyze new workplace connections.
2
DAPIE: Interactive Step-by-Step Explanatory Dialogues to Answer Children’s Why and How Questions
Yoonjoo Lee (KAIST, Daejeon, Korea, Republic of)Tae Soo Kim (KAIST, Daejeon, Korea, Republic of)Sungdong Kim (NAVER AI Lab, Seongnam, Korea, Republic of)Yohan Yun (KAIST, Suwon, Gyeonggi, Korea, Republic of)Juho Kim (KAIST, Daejeon, Korea, Republic of)
Children acquire an understanding of the world by asking "why" and "how" questions. Conversational agents (CAs) like smart speakers or voice assistants can be promising respondents to children's questions as they are more readily available than parents or teachers. However, CAs' answers to "why" and "how" questions are not designed for children, as they can be difficult to understand and provide little interactivity to engage the child. In this work, we propose design guidelines for creating interactive dialogues that promote children's engagement and help them understand explanations. Applying these guidelines, we propose DAPIE, a system that answers children's questions through interactive dialogue by employing an AI-based pipeline that automatically transforms existing long-form answers from online sources into such dialogues. A user study (N=16) showed that, with DAPIE, children performed better in an immediate understanding assessment while also reporting higher enjoyment than when explanations were presented sentence-by-sentence.
2
Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute
Kars Alfrink (Delft University of Technology, Delft, Netherlands)Ianus Keller (Delft University of Technology, Delft, NB, Netherlands)Neelke Doorn (Delft University of Technology, Delft, Netherlands)Gerd Kortuem (Delft University of Technology, Delft, Netherlands)
Local governments increasingly use artificial intelligence (AI) for automated decision-making. Contestability, making systems responsive to dispute, is a way to ensure they respect human rights to autonomy and dignity. We investigate the design of public urban AI systems for contestability through the example of camera cars: human-driven vehicles equipped with image sensors. Applying a provisional framework for contestable AI, we use speculative design to create a concept video of a contestable camera car. Using this concept video, we then conduct semi-structured interviews with 17 civil servants who work with AI employed by a large northwestern European city. The resulting data is analyzed using reflexive thematic analysis to identify the main challenges facing the implementation of contestability in public AI. We describe how civic participation faces issues of representation, public AI systems should integrate with existing democratic practices, and cities must expand capacities for responsible AI development and operation.
2
The Effects of Avatar and Environment on Thermal Perception and Skin Temperature in Virtual Reality
Martin Kocur (University of Regensburg, Regensburg, Germany)Lukas Jackermeier (University of Regensburg, Regensburg, Germany)Valentin Schwind (Frankfurt University of Applied Sciences, Frankfurt, Germany)Niels Henze (University of Regensburg, Regensburg, Germany)
Humans' thermal regulation and subjective perception of temperature are highly plastic and depend on the visual appearance of the surrounding environment. Previous work shows that an environment’s color temperature affects the experienced temperature. As virtual reality (VR) enables visual immersion, recent work suggests that a VR scene's color temperature also affects experienced temperature. It is, however, unclear if an avatar’s appearance also affects users’ thermal perception and if a change in thermal perception even influences the body temperature. Therefore, we conducted a study with 32 participants performing a task in an ice or fire world while having ice or fire hands. We show that being in a fire world or having fire hands increases the perceived temperature. We even show that having fire hands decreases the hand temperature compared to having ice hands. We discuss the implications for the design of VR systems and future research directions.
2
Supporting Piggybacked Co-Located Leisure Activities via Augmented Reality
Samantha Reig (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Erica Principe Cruz (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Melissa Powers (New York University, New York, New York, United States)Jennifer He (Stanford University, Stanford, California, United States)Timothy Chong (University of Washington, Seattle, Washington, United States)Yu Jiang Tham (Snap Inc., Seattle, Washington, United States)Sven Kratz (Snap, Inc., Seattle, Washington, United States)Ava Robinson (Northwestern University, Evanston, Illinois, United States)Brian A. Smith (Columbia University, New York, New York, United States)Rajan Vaish (Snap Inc., Santa Monica, California, United States)Andrés Monroy-Hernández (Princeton University, Princeton, New Jersey, United States)
Technology, especially the smartphone, is villainized for taking meaning and time away from in-person interactions and secluding people into "digital bubbles". We believe this is not an intrinsic property of digital gadgets, but evidence of a lack of imagination in technology design. Leveraging augmented reality (AR) toward this end allows us to create experiences for multiple people, their pets, and their environments. In this work, we explore the design of AR technology that "piggybacks" on everyday leisure to foster co-located interactions among close ties (with other people and pets). We designed, developed, and deployed three such AR applications, and evaluated them through a 41-participant and 19-pet user study. We gained key insights about the ability of AR to spur and enrich interaction in new channels, the importance of customization, and the challenges of designing for the physical aspects of AR devices (e.g., holding smartphones). These insights guide design implications for the novel research space of co-located AR.
2
What does it mean to cycle in Virtual Reality? Exploring Cycling Fidelity and Control of VR Bicycle Simulators
Andrii Matviienko (Technical University of Darmstadt, Darmstadt, Germany)Hajris Hoxha (Technical University of Darmstadt, Darmstadt, Germany)Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
Creating highly realistic Virtual Reality (VR) bicycle experiences can be time-consuming and expensive. Moreover, it is unclear what hardware parts are necessary to design a bicycle simulator and whether a bicycle is needed at all. In this paper, we investigated cycling fidelity and control of VR bicycle simulators. For this, we developed and evaluated three cycling simulators: (1) cycling without a bicycle (bikeless), (2) cycling on a fixed (stationary) and (3) moving bicycle (tandem) with four levels of control (no control, steering, pedaling, and steering + pedaling). To evaluate all combinations of fidelity and control, we conducted a controlled experiment (N = 24) in indoor and outdoor settings. We found that the bikeless setup provides the highest feeling of safety, while the tandem leads to the highest realism without increasing motion sickness. Moreover, we discovered that bicycles are not essential for cycling in VR.
2
ReMotion: Supporting Remote Collaboration in Open Space with Automatic Robotic Embodiment
Mose Sakashita (Cornell University, Ithaca, New York, United States)Ruidong Zhang (Cornell University, Ithaca, New York, United States)Xiaoyi Li (Cornell University , Ithaca, New York, United States)Hyunju Kim (Cornell University, Ithaca, New York, United States)Michael Russo (Cornell University, Ithaca, New York, United States)Cheng Zhang (Cornell University, ITHACA, New York, United States)Malte F. Jung (Cornell University, Ithaca, New York, United States)Francois Guimbretiere (Cornell University, Ithaca, New York, United States)
Design activities, such as brainstorming or critique, often take place in open spaces combining whiteboards and tables to present artefacts. In co-located settings, peripheral awareness enables participants to understand each other’s locus of attention with ease. However, these spatial cues are mostly lost while using videoconferencing tools. Telepresence robots could bring back a sense of presence, but controlling them is distracting. To address this problem, we present ReMotion, a fully automatic robotic proxy designed to explore a new way of supporting non-collocated open-space design activities. ReMotion combines a commodity body tracker (Kinect) to capture a user’s location and orientation over a wide area with a minimally invasive wearable system (NeckFace) to capture facial expressions. Due to its omnidirectional platform, ReMotion embodiment can render a wide range of body movements. A formative evaluation indicated that our system enhances the sharing of attention and the sense of co-presence enabling seamless movement-in-space during a design review task.
2
Take My Hand: Automated Hand-Based Spatial Guidance for the Visually Impaired
Adil Rahman (University of Virginia, Charlottesville, Virginia, United States)Md Aashikur Rahman Azim (University of Virginia, Charlottesville, Virginia, United States)Seongkook Heo (University of Virginia, Charlottesville, Virginia, United States)
Tasks that involve locating objects and then moving hands to those specific locations, such as using touchscreens or grabbing objects on a desk, are challenging for the visually impaired. Over the years, audio guidance and haptic feedback have been a staple in hand navigation based assistive technologies. However, these methods require the user to interpret the generated directional cues and then manually perform the hand motions. In this paper, we present automated hand-based spatial guidance to bridge the gap between guidance and execution, allowing visually impaired users to move their hands between two points automatically, without any manual effort. We implement this concept through FingerRover, an on-finger miniature robot that carries the user's finger to target points. We demonstrate the potential applications that can benefit from automated hand-based spatial guidance. Our user study shows the potential of our technique in improving the interaction capabilities of people with visual impairments.
2
Facilitating Experiential Training for Counselors using a Real-time Annotation Tool
Tianying Chen (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Michael Xieyang Liu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Emily Ding (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Emma O'Neil (University of Pennsylvania, Philadelphia, Pennsylvania, United States)Mansi Agarwal (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Robert E. Kraut (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Laura Dabbish (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Experiential training, where mental health professionals practice their learned skills, remains the most costly component of therapeutic training. We introduce Pin-MI, a video-call-based tool that supports experiential learning of counseling skills used in motivational interviewing (MI) through interactive role-play as client and counselor. In Pin-MI, counselors annotate, or "pin" the important moments in their role-play sessions in real-time. The pins are then used post-session to facilitate a reflective learning process, in which both client and counselor can provide feedback about what went well or poorly during each pinned moment. We discuss the design of Pin-MI and a qualitative evaluation with a set of healthcare professionals learning MI. Our evaluation suggests that Pin-MI helped users develop empathy, be more aware of their skill usage, guaranteed immediate and targeted feedback, and helped users correct misconceptions about their performance. We discuss implications for the design of experiential training tools for learning counseling skills.
2
ConceptEVA: Concept-Based Interactive Exploration and Customization of Document Summaries
Xiaoyu Zhang (University of California, Davis, Davis, California, United States)Jianping Li (University of California, Davis, Davis, California, United States)Po-Wei Chi (None, pwchi@ucdavis.edu, California, United States)Senthil Chandrasegaran (Delft University of Technology, Delft, Netherlands)Kwan-Liu Ma (University of California at Davis, Davis, California, United States)
Even with the most advanced natural language processing and artificial intelligence approaches, effective summarization of long, multi-topic documents---such as academic papers---for readers from different domains remains a challenge. To address this, we introduce ConceptEVA, a mixed-initiative approach to generate, evaluate, and customize summaries for long and multi-topic documents. ConceptEVA incorporates a custom multi-task Longformer encoder-decoder to summarize longer documents. Interactive visualizations of document concepts as a network reflecting both semantic relatedness and co-occurrence help users focus on concepts of interest. The user can select these concepts and automatically update the summary to emphasize them. We present two iterations of ConceptEVA evaluated through an expert review and a within-subjects study. We find that participants' satisfaction with customized summaries through ConceptEVA is higher than with their own manually generated summaries, while incorporating critique into the summaries proved challenging. Based on our findings, we make recommendations for designing summarization systems incorporating mixed-initiative interactions.
2
Love on the spectrum: Toward Inclusive online dating experience of autistic individuals
Dasom Choi (KAIST, Daejeon, Korea, Republic of)Sung-In Kim (Seoul Dasiseogi Homeless Support Center, Seoul, Korea, Republic of)Sunok Lee (KAIST, Daejeon, Korea, Republic of)Hyunseung Lim (KAIST, Daejeon, Korea, Republic of)Hee Jeong Yoo (Seoul National University Bundang Hospital, Seongnam, Korea, Republic of)Hwajung Hong (KAIST, Daejeon, Korea, Republic of)
Online dating is a space where autistic individuals can find romantic partners with reduced social demands. Autistic individuals are often expected to adapt their behaviors to the social norms underlying the online dating platform to appear as desirable romantic partners. However, given that their autistic traits can lead them to different expectations of dating, it is uncertain whether conforming their behaviors to the norm will guide them to the person they truly want. In this paper, we explored the perceptions and expectations of autistic adults in online dating through interviews and workshops. We found that autistic people desired to know whether they behaved according to the platform's norms. Still, they expected to keep their unique characteristics rather than unconditionally conform to the norm. We conclude by providing suggestions for designing inclusive online dating experiences that could foster self-guided decisions of autistic users and embrace their unique characteristics.
2
Layout Generation for Various Scenarios in Mobile Shopping Apps
Qianzhi Jing (College of Computer Science and Technology, Hangzhou, China)TingTing Zhou (Alibaba Group, Hangzhou, Zhejiang, China)Yixin Zeng (Zhejiang University, Hangzhou, China)Liuqing Chen (Zhejiang University, Hangzhou, Zhejiang, China)Lingyun Sun (Zhejiang University, Hangzhou, China)Yankun Zhen (Alibaba Group, Hangzhou, Zhejiang, China)Yichun Du (Alibaba Group, Hangzhou, China)
Layout is essential for the product listing pages (PLPs) in mobile shopping applications. To clearly convey the information that consumers require and to achieve specific functions, PLP layouts often have many variations driven by scenarios. In this work, we study PLP layout design for different scenarios and propose a design space to guide the large-scale creation of PLPs. We propose LayoutVQ-VAE, a novel model specialized in generating layouts with internal and external constraints. LayoutVQ-VAE differs from previous methods in that it learns a discrete latent representation of layout and can model the relationship between layout representation and scenarios without applying heuristics. Experiments on publicly available benchmarks for different layout types validate that our method performs comparably or favorably against state-of-the-art methods. Case studies show that the proposed approach, including the design space and model, is effective in producing large-scale, high-quality PLP layouts for mobile shopping platforms.
2
Reality Rifts: Wonder-ful Interfaces by Disrupting Perceptual Causality
Lung-Pan Cheng (National Taiwan University, Taipei, Taiwan)Yi Chen (National Taiwan University, Taipei, Taiwan)Yi-Hao Peng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Christian Holz (ETH Zürich, Zurich, Switzerland)
Reality Rifts are interfaces between physical and virtual reality, where incoherent observations of physical behavior lead users to imagine comprehensive and plausible end-to-end dynamics. Reality Rifts emerge in interactive physical systems that lack one or more components that are central to their operation, yet where the physical end-to-end interaction persists with plausible outcomes. Even in the presence of a Reality Rift, users can still interact with a system—much like they would with the unaltered and complete counterpart—leading them to implicitly infer the existence and imagine the behavior of the lacking components from observable phenomena and outcomes. Therefore, dynamic systems with Reality Rifts trigger doubt, curiosity, and rumination—a sense of wonder that users experience when observing a Reality Rift due to their innate curiosity. In this paper, we explore how interactive systems can elicit and guide the user's imagination by integrating Reality Rifts. We outline the design process for opening a Reality Rift in interactive physical systems, describe the resulting design space, and explore it through six characteristic prototypes. To understand to what extent and with which qualities these prototypes indeed induce a sense of wonder during an interaction, we evaluated Reality Rifts in a field deployment with 50 participants. We discuss participants' behavior and derive factors for the implementation of future wonder-ful experiences.
2
How Bold can we be? The impact of adjusting Font Grade on Readability in light and dark Polarities
Hilary Palmén (Google LLC, Mountain View, California, United States)Michael Dean. Gilbert (Google LLC, Mountain View, California, United States)Dave Crossland (Google LLC, Mountain View, California, United States)
Variable font file technology enables adjusting fonts on scaled axes that can include weight and grade. While making text bold increases the character width, grade achieves boldness without increasing character width or causing text reflow. Through two studies with a total of 459 participants, we examined the effect of varying grade levels on both glancing and paragraph reading tasks in light and dark modes. We show that dark text on a light background (light mode, LM) is read reliably faster than its polar opposite (dark mode, DM). We found an effect of mode for both glance and paragraph reading, and an effect of grade for LM at heavier, increased grade levels. Paragraph readers are not choosing, or preferring, LM over DM despite its fluency benefits and reported visual clarity. Software designers can vary grade across the tested font formats to influence design aesthetics and user preferences without worrying about reducing reading fluency.
2
SwellSense: Creating 2.5D interactions with micro-capsule paper
Tingyu Cheng (Interactive Computing, Atlanta, Georgia, United States)Zhihan Zhang (University of Washington, Seattle, Washington, United States)Bingrui Zong (Georgia Institute of Technology, Atlanta, Georgia, United States)Yuhui Zhao (Georgia Institute of Technology, Atlanta, Georgia, United States)Zekun Chang (Cornell University, Ithaca, New York, United States)Ye Jun Kim (Georgia Institute of Technology, Atlanta, Georgia, United States)Clement Zheng (National University of Singapore, Singapore, Singapore, Singapore)Gregory D. Abowd (Northeastern University, Boston, Massachusetts, United States)HyunJoo Oh (Georgia Institute of Technology, Atlanta, Georgia, United States)
In this paper, we propose SwellSense, a fabrication technique that screen prints stretchable circuits onto a special micro-capsule paper, creating localized swelling patterns with sensing capabilities. This simple technique allows users to create a wide range of paper-based tactile interactive devices that largely maintain a 2D planar form factor but can also be curved or folded into 3D interactive artifacts. We first present design guidelines to support various tactile interaction designs, including basic tactile graphic geometries, patterns with directional density, and finer interactive textures with embedded sensing such as touch sensors, pressure sensors, and mechanical switches. We then provide a design editor to enable users to design more creatively with the SwellSense technique. We report a technical evaluation and a user evaluation to validate the basic performance of SwellSense. Lastly, we demonstrate several application examples and conclude with a discussion of current limitations and future work.