List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

15
FakeForward: Using Deepfake Technology for Feedforward Learning
Christopher Clarke (University of Bath, Bath, United Kingdom); Jingnan Xu (University of Bath, Bath, United Kingdom); Ye Zhu (University of Bath, Bath, United Kingdom); Karan Dharamshi (University of Bath, Bath, United Kingdom); Harry McGill (University of Bath, Bath, United Kingdom); Stephen Black (University of Bath, Bath, United Kingdom); Christof Lutteroth (University of Bath, Bath, United Kingdom)
Videos are commonly used to support learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data and labour intensive: a lot of video footage needs to be collected and manually edited in order to create an effective self-modelling video. We address this by presenting FakeForward -- a method which uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user’s. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.
13
PunchPrint: Creating Composite Fiber-Filament Craft Artifacts by Integrating Punch Needle Embroidery and 3D Printing
Ashley Del Valle (University of California Santa Barbara, Santa Barbara, California, United States); Mert Toka (University of California Santa Barbara, Santa Barbara, California, United States); Alejandro Aponte (University of California Santa Barbara, Santa Barbara, California, United States); Jennifer Jacobs (University of California Santa Barbara, Santa Barbara, California, United States)
New printing strategies have enabled 3D-printed materials that imitate traditional textiles. These filament-based textiles are easy to fabricate but lack the look and feel of fiber textiles. We seek to augment 3D-printed textiles with needlecraft to produce composite materials that integrate the programmability of additive fabrication with the richness of traditional textile craft. We present PunchPrint: a technique for integrating fiber and filament in a textile by combining punch needle embroidery and 3D printing. Using a toolpath that imitates textile weave structure, we print a flexible fabric that provides a substrate for punch needle production. We evaluate our material’s robustness through tensile strength and needle compatibility tests. We integrate our technique into a parametric design tool and produce functional artifacts that show how PunchPrint broadens punch needle craft by reducing labor in small, detailed artifacts, enabling the integration of openings and multiple yarn weights, and scaffolding soft 3D structures.
8
Imprimer: Computational Notebooks for CNC Milling
Jasper Tran O'Leary (University of Washington, Seattle, Washington, United States); Gabrielle Benabdallah (University of Washington, Seattle, Washington, United States); Nadya Peek (University of Washington, Seattle, Washington, United States)
Digital fabrication in industrial contexts involves standardized procedures that prioritize precision and repeatability. However, fabrication machines are now available for practitioners who focus instead on experimentation. In this paper, we reframe hobbyist CNC milling as writing literate programs which interleave documentation, interactive graphics, and source code for machine control. To test this approach, we present Imprimer, a machine infrastructure for a CNC mill and an associated library for a computational notebook. Imprimer lets makers learn experimentally, prototype new interactions for making, and understand physical processes by writing and debugging code. We demonstrate three experimental milling workflows as computational notebooks, conduct a user study with practitioners with a range of backgrounds, and discuss literate programming as a future vision for digital fabrication altogether.
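The abstract describes notebook cells that interleave documentation with machine-control code. As a toy illustration of what such a cell might emit (this generator and its parameters are illustrative, not taken from Imprimer), a few lines of Python can produce the G-code for one square milling pass:

```python
def square_toolpath(side_mm, depth_mm, feed=300):
    """Generate G-code for one square pass, the kind of snippet a
    computational-notebook cell for a CNC mill might emit.
    G1 = linear move at the given feed rate; G0 = rapid move."""
    gcode = [f"G1 Z-{depth_mm} F{feed}"]  # plunge to cutting depth
    for x, y in [(side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]:
        gcode.append(f"G1 X{x} Y{y} F{feed}")  # trace the square
    gcode.append("G0 Z5")  # retract above the stock
    return gcode

print("\n".join(square_toolpath(20, 1)))
```

Because the toolpath is ordinary data returned from a function, a notebook can plot it, document it, and tweak it interactively before sending it to the machine, which is the workflow the paper argues for.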
6
3D Printable Play-Dough: New Biodegradable Materials and Creative Possibilities for Digital Fabrication
Leah Buechley (University of New Mexico, Albuquerque, New Mexico, United States); Ruby Ta (University of New Mexico, Albuquerque, New Mexico, United States)
Play-dough is a brightly-colored, easy-to-make, and familiar material. We have developed and tested custom play-dough materials that can be employed in 3D printers designed for clay. This paper introduces a set of recipes for 3D printable play-dough along with an exploration of these materials' print characteristics. We explore the design potential of play-dough as a sustainable fabrication material, highlighting its recyclability, compostability, and repairability. We demonstrate how custom-color prints can be designed and constructed and describe how play-dough can be used as a support material for clay 3D prints. We also present a set of example artifacts made from play-dough and discuss opportunities for future research.
5
Understanding Moderators' Conflict and Conflict Management Strategies with Streamers in Live Streaming Communities
Jie Cai (Penn State University, University Park, Pennsylvania, United States); Donghee Yvette Wohn (New Jersey Institute of Technology, Newark, New Jersey, United States)
As each micro community centered around the streamer attempts to set its own guidelines in live streaming communities, it is common for volunteer moderators (mods) and the streamer to disagree on how to handle various situations. In this study, we conducted an online survey (N=240) with live streaming mods to explore their commitment to the streamer to grow the micro community and the different styles in which they handle conflicts with the streamer. We found that 1) mods apply more active and cooperative styles than passive and assertive styles to manage conflicts, but they might be forced to do so, and 2) mods with strong commitments to the streamer would like to apply styles showing either high concerns for the streamer or low concerns for themselves. We reflect on how these results can affect micro community development and recommend designs to mitigate conflict and strengthen commitment.
5
Augmenting Human Cognition with an AI-Mediated Intelligent Visual Feedback
Songlin Xu (University of California, San Diego, San Diego, California, United States); Xinyu Zhang (University of California San Diego, San Diego, California, United States)
In this paper, we introduce an AI-mediated framework that can provide intelligent feedback to augment human cognition. Specifically, we leverage deep reinforcement learning (DRL) to provide adaptive time pressure feedback to improve user performance in a math arithmetic task. Time pressure feedback could either improve or deteriorate user performance by regulating user attention and anxiety. Adaptive time pressure feedback controlled by a DRL policy according to users' real-time performance could potentially solve this trade-off problem. However, the DRL training and hyperparameter tuning may require large amounts of data and iterative user studies. Therefore, we propose a dual-DRL framework that trains a regulation DRL agent to regulate user performance by interacting with another simulation DRL agent that mimics user cognition behaviors from an existing dataset. Our user study demonstrates the feasibility and effectiveness of the dual-DRL framework in augmenting user performance, in comparison to the baseline group.
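The dual-DRL idea (a regulation agent trained against a simulated-user agent, so that no live participants are needed during training) can be sketched with much simpler stand-ins. In this toy version, assumed for illustration only and not the authors' implementation, the simulated user has an inverted-U response to time pressure and an epsilon-greedy bandit plays the role of the regulation agent:

```python
import random

class SimulatedUser:
    """Stand-in for the simulation agent: accuracy peaks at moderate
    time pressure, mirroring the attention/anxiety trade-off the
    abstract describes. Values are invented for the sketch."""
    def accuracy(self, pressure):
        # pressure levels: 0 = low, 1 = medium, 2 = high
        return {0: 0.70, 1: 0.85, 2: 0.60}[pressure]

class Regulator:
    """Epsilon-greedy stand-in for the regulation DRL agent."""
    def __init__(self, n_actions=3, epsilon=0.1):
        self.q = [0.0] * n_actions   # running estimate of reward per action
        self.n = [0] * n_actions
        self.epsilon = epsilon
    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))  # explore
        return max(range(len(self.q)), key=lambda a: self.q[a])  # exploit
    def update(self, action, reward):
        self.n[action] += 1
        self.q[action] += (reward - self.q[action]) / self.n[action]

def train(steps=2000, seed=0):
    random.seed(seed)
    user, reg = SimulatedUser(), Regulator()
    for _ in range(steps):
        a = reg.act()
        # reward 1 if the simulated user answers correctly under pressure a
        r = 1.0 if random.random() < user.accuracy(a) else 0.0
        reg.update(a, r)
    return max(range(3), key=lambda a: reg.q[a])

print("learned pressure level:", train())  # typically 1 (medium)
```

The structure mirrors the paper's framework: the regulator only ever interacts with the simulated user, so the expensive loop of iterative user studies is replaced by cheap simulated episodes.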
4
CiteSee: Augmenting Citations in Scientific Papers with Persistent and Personalized Historical Context
Joseph Chee Chang (Allen Institute for AI, Seattle, Washington, United States); Amy X. Zhang (University of Washington, Seattle, Washington, United States); Jonathan Bragg (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Andrew Head (University of Pennsylvania, Philadelphia, Pennsylvania, United States); Kyle Lo (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Doug Downey (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Daniel S. Weld (Allen Institute for Artificial Intelligence, Seattle, Washington, United States)
When reading a scholarly article, inline citations help researchers contextualize the current article and discover relevant prior work. However, it can be challenging to prioritize and make sense of the hundreds of citations encountered during literature reviews. This paper introduces CiteSee, a paper reading tool that leverages a user's publishing, reading, and saving activities to provide personalized visual augmentations and context around citations. First, CiteSee connects the current paper to familiar contexts by surfacing known citations a user had cited or opened. Second, CiteSee helps users prioritize their exploration by highlighting relevant but unknown citations based on saving and reading history. We conducted a lab study that suggests CiteSee is significantly more effective for paper discovery than three baselines. A field deployment study shows CiteSee helps participants keep track of their explorations and leads to better situational awareness and increased paper discovery via inline citation when conducting real-world literature reviews.
4
A Human-Computer Collaborative Editing Tool for Conceptual Diagrams
Lihang Pan (Tsinghua University, Beijing, China); Chun Yu (Tsinghua University, Beijing, China); Zhe He (Tsinghua University, Beijing, China); Yuanchun Shi (Tsinghua University, Beijing, China)
Editing (e.g., editing conceptual diagrams) is a typical office task that requires numerous tedious GUI operations, resulting in poor interaction efficiency and user experience, especially on mobile devices. In this paper, we present a new type of human-computer collaborative editing tool (CET) that enables accurate and efficient editing with little interaction effort. CET divides the task into two parts, and the human and the computer focus on their respective specialties: the human describes high-level editing goals with multimodal commands, while the computer calculates, recommends, and performs detailed operations. We conducted a formative study (N = 16) to determine the concrete task division and implemented the tool on Android devices for the specific task of editing conceptual diagrams. The user study (N = 24 + 20) showed that it increased diagram editing speed by 32.75% compared with existing state-of-the-art commercial tools and led to better editing results and user experience.
4
“Should I Follow the Human, or Follow the Robot?” — Robots in Power Can Have More Influence Than Humans on Decision-Making
Yoyo Tsung-Yu Hou (Cornell University, Ithaca, New York, United States); Wen-Ying Lee (Cornell University, Ithaca, New York, United States); Malte F. Jung (Cornell University, Ithaca, New York, United States)
Artificially intelligent (AI) agents such as robots are increasingly delegated power in work settings, yet it remains unclear how power functions in interactions with both humans and robots, especially when they directly compete for influence. Here we present an experiment where every participant was matched with one human and one robot to perform decision-making tasks. By manipulating who has power, we created three conditions: human as leader, robot as leader, and a no-power-difference control. The results showed that the participants were significantly more influenced by the leader, regardless of whether the leader was a human or a robot. However, they generally held a more positive attitude toward the human than the robot, although they considered whichever was in power as more competent. This study illustrates the importance of power for future Human-Robot Interaction (HRI) and Human-AI Interaction (HAI) research, as it addresses pressing concerns of society about AI-powered intelligent agents.
4
Feel the Force, See the Force: Exploring Visual-tactile Associations of Deformable Surfaces with Colours and Shapes
Cameron Steer (University of Bath, Bath, United Kingdom); Teodora Dinca (University of Bath, Bath, United Kingdom); Crescent Jicol (University of Bath, Bath, United Kingdom); Michael J. Proulx (University of Bath, Bath, United Kingdom); Jason Alexander (University of Bath, Bath, United Kingdom)
Deformable interfaces provide unique interaction potential for force input, for example, when users physically push into a soft display surface. However, there remains limited understanding of which visual-tactile design elements signify the presence and stiffness of such deformable force-input components. In this paper, we explore how people correspond surface stiffness to colours, graphical shapes, and physical shapes. We conducted a cross-modal correspondence (CC) study, where 30 participants associated different surface stiffnesses with colours and shapes. Our findings evidence the CCs between stiffness levels for a subset of the 2D/3D shapes and colours used in the study. We distil our findings in three design recommendations: (1) lighter colours should be used to indicate soft surfaces, and darker colours should indicate stiff surfaces; (2) rounded shapes should be used to indicate soft surfaces, while less-curved shapes should be used to indicate stiffer surfaces, and; (3) longer 2D drop-shadows should be used to indicate softer surfaces, while shorter drop-shadows should be used to indicate stiffer surfaces.
4
Co-Writing with Opinionated Language Models Affects Users' Views
Maurice Jakesch (Cornell University, Ithaca, New York, United States); Advait Bhat (Microsoft Research India, Bangalore, India); Daniel Buschek (University of Bayreuth, Bayreuth, Germany); Lior Zalmanson (Tel Aviv University, Tel Aviv, Tel Aviv District, Israel); Mor Naaman (Cornell Tech, New York, New York, United States)
If large language models like GPT-3 preferably produce a particular point of view, they may influence people's opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write -- and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
3
UndoPort: Exploring the Influence of Undo-Actions for Locomotion in Virtual Reality on the Efficiency, Spatial Understanding and User Experience
Florian Müller (LMU Munich, Munich, Germany); Arantxa Ye (LMU Munich, Munich, Germany); Dominik Schön (TU Darmstadt, Darmstadt, Germany); Julian Rasch (LMU Munich, Munich, Germany)
When we get lost in Virtual Reality (VR) or want to return to a previous location, we use the same methods of locomotion for the way back as for the way forward. This is time-consuming and requires additional physical orientation changes, increasing the risk of getting tangled in the headsets' cables. In this paper, we propose the use of undo actions to revert locomotion steps in VR. We explore eight different variations of undo actions as extensions of point & teleport, based on the possibility to undo position and orientation changes together with two different visualizations of the undo step (discrete and continuous). We contribute the results of a controlled experiment with 24 participants investigating the efficiency and orientation of the undo techniques in a radial maze task. We found that the combination of position and orientation undo together with a discrete visualization resulted in the highest efficiency without increasing orientation errors.
3
Libraries of Things: Understanding the Challenges of Sharing Tangible Collections and the Opportunities for HCI
Lee Jones (Queen's University, Kingston, Ontario, Canada); Alaa Nousir (Queen's University, Kingston, Ontario, Canada); Tom Everrett (Ingenium - Canada's Museums of Science and Innovation, Ottawa, Ontario, Canada); Sara Nabil (Queen's University, Kingston, Ontario, Canada)
“Libraries of Things” are tangible collections of borrowable objects. There are many benefits to Libraries of Things, such as making objects and skill-building accessible, reducing waste through the sharing of items, and saving costs associated with purchasing rarely-used items. We introduce the first HCI study of Libraries of Things by interviewing 23 librarians who run a variety of collections such as handheld tools, gear, and musical instruments – within public institutions and more grass-roots efforts in the private sector. In our findings, we discuss the challenges these collections experience in changing behavioural patterns from buying to borrowing, helping individuals 'try new things', iterating to find sharable items, training staff, and manual intervention throughout the borrowing cycle. We present 5 opportunities for HCI research to support interactive skill-sharing, self-borrowing, maintenance recognition and cataloguing 'things', organizing non-uniform inventories, and creating public-awareness. Further in-the-wild studies should also consider the tensions between the values of these organizations and low-cost convenient usage.
3
“I Won't Go Speechless”: Design Exploration on a Real-Time Text-To-Speech Speaking Tool for Videoconferencing
Wooseok Kim (KAIST, Daejeon, Republic of Korea); Jian Jun (KAIST, Daejeon, Republic of Korea); Minha Lee (KAIST, Daejeon, Republic of Korea); Sangsu Lee (KAIST, Daejeon, Republic of Korea)
The COVID-19 pandemic has shifted many business activities to non-face-to-face activities, and videoconferencing has become a new paradigm. However, conference spaces isolated from surrounding interferences are not always readily available. People frequently participate in public places with unexpected crowds or acquaintances, such as cafés, living rooms, and shared offices. These environments have surrounding limitations that potentially cause challenges in speaking up during videoconferencing. To alleviate these issues and support the users in speaking-restrained spatial contexts, we propose a text-to-speech (TTS) speaking tool as a new speaking method to support active videoconferencing participation. We derived the possibility of a TTS speaking tool and investigated the empirical challenges and user expectations of a TTS speaking tool using a technology probe and participatory design methodology. Based on our findings, we discuss the need for a TTS speaking tool and suggest design considerations for its application in videoconferencing.
3
Short-Form Videos Degrade Our Capacity to Retain Intentions: Effect of Context Switching On Prospective Memory
Francesco Chiossi (LMU Munich, Munich, Germany); Luke Haliburton (LMU Munich, Munich, Germany); Changkun Ou (LMU Munich, Munich, Germany); Andreas Martin Butz (LMU Munich, Munich, Germany); Albrecht Schmidt (LMU Munich, Munich, Germany)
Social media platforms use short, highly engaging videos to catch users' attention. While the short-form video feeds popularized by TikTok are rapidly spreading to other platforms, we do not yet understand their impact on cognitive functions. We conducted a between-subjects experiment (N=60) investigating the impact of engaging with TikTok, Twitter, and YouTube while performing a Prospective Memory task (i.e., executing a previously planned action). The study required participants to remember intentions over interruptions. We found that the TikTok condition significantly degraded the users’ performance in this task. As none of the other conditions (Twitter, YouTube, no activity) had a similar effect, our results indicate that the combination of short videos and rapid context-switching impairs intention recall and execution. We contribute a quantified understanding of the effect of social media feed format on Prospective Memory and outline consequences for media technology designers to not harm the users’ memory and wellbeing.
3
What is Human-Centered about Human-Centered AI? A Map of the Research Landscape
Tara Capel (Queensland University of Technology, Brisbane, QLD, Australia); Margot Brereton (QUT, Brisbane, Australia)
The application of Artificial Intelligence (AI) across a wide range of domains comes with both high expectations of its benefits and dire predictions of misuse. While AI systems have largely been driven by a technology-centered design approach, the potential societal consequences of AI have mobilized both HCI and AI researchers towards researching human-centered artificial intelligence (HCAI). However, there remains considerable ambiguity about what it means to frame, design and evaluate HCAI. This paper presents a critical review of the large corpus of peer-reviewed literature emerging on HCAI in order to characterize what the community is defining as HCAI. Our review contributes an overview and map of HCAI research based on work that explicitly mentions the terms ‘human-centered artificial intelligence’ or ‘human-centered machine learning’ or their variations, and suggests future challenges and research directions. The map reveals the breadth of research happening in HCAI, established clusters and the emerging areas of Interaction with AI and Ethical AI. The paper contributes a new definition of HCAI, and calls for greater collaboration between AI and HCI research, and new HCAI constructs.
3
Birds of a Feather Video-Flock Together: Design and Evaluation of an Agency-Based Parrot-to-Parrot Video-Calling System for Interspecies Ethical Enrichment
Rebecca Kleinberger (Northeastern University, Boston, Massachusetts, United States); Jennifer Cunha (Parrot Kindergarten, Jupiter, Florida, United States); Megha M. Vemuri (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States); Ilyena Hirskyj-Douglas (The University of Glasgow, Glasgow, United Kingdom)
Over 20 million parrots are kept as pets in the US, often lacking appropriate stimuli to meet their high social, cognitive, and emotional needs. After reviewing bird perception and agency literature, we developed an approach to allow parrots to engage in video-calling other parrots. Following a pilot experiment and expert survey, we ran a three-month study with 18 pet birds to evaluate the potential value and usability of a parrot-parrot video-calling system. We assessed the system in terms of perception, agency, engagement, and overall perceived benefits. With 147 bird-triggered calls, our results show that 1) every bird used the system, 2) most birds exhibited high motivation and intentionality, and 3) all caretakers reported perceived benefits, some arguably life-transformative, such as learning to forage or even to fly by watching others. We report on individual insights and propose considerations regarding ethics and the potential of parrot video-calling for enrichment.
3
User-Driven Constraints for Layout Optimisation in Augmented Reality
Aziz Niyazov (IRIT - University of Toulouse, Toulouse, France); Barrett Ens (Monash University, Melbourne, Australia); Kadek Ananta Satriadi (Monash University, Melbourne, Australia); Nicolas Mellado (CNRS, Toulouse, France); Loic Barthe (IRIT - University of Toulouse, Toulouse, France); Tim Dwyer (Monash University, Melbourne, VIC, Australia); Marcos Serrano (IRIT - Elipse, Toulouse, France)
Automatic layout optimisation allows users to arrange augmented reality content in the real-world environment without the need for tedious manual interactions. This optimisation is often based on modelling the intended content placement as constraints, defined as cost functions. Then, applying a cost minimization algorithm leads to a desirable placement. However, such an approach is limited by the lack of user control over the optimisation results. In this paper we explore the concept of user-driven constraints for augmented reality layout optimisation. With our approach users can define and set up their own constraints directly within the real-world environment. We first present a design space composed of three dimensions: the constraints, the regions of interest and the constraint parameters. Then we explore which input gestures can be employed to define the user-driven constraints of our design space through a user elicitation study. Using the results of the study, we propose a holistic system design and implementation demonstrating our user-driven constraints, which we evaluate in a final user study where participants had to create several constraints at the same time to arrange a set of virtual contents.
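The general mechanism the abstract describes (placement constraints expressed as cost functions, summed with weights and minimized over candidate positions) can be illustrated with a toy 2D sketch. The cost functions, weights, and grid below are invented for illustration and are not taken from the paper:

```python
import math

def occlusion_cost(pos, avoid_region):
    """Penalty of 1 inside a user-defined keep-out rectangle
    (e.g., a region of the real scene the user wants unobstructed)."""
    (x, y), (x0, y0, x1, y1) = pos, avoid_region
    return 1.0 if x0 <= x <= x1 and y0 <= y <= y1 else 0.0

def anchor_cost(pos, anchor):
    """Distance from the content's preferred anchor point."""
    return math.dist(pos, anchor)

def best_placement(anchor, avoid_region, weights=(5.0, 1.0), grid=10):
    """Minimize the weighted sum of constraint costs over a coarse grid."""
    w_occ, w_anc = weights
    candidates = [(x, y) for x in range(grid) for y in range(grid)]
    def total(p):
        return w_occ * occlusion_cost(p, avoid_region) + w_anc * anchor_cost(p, anchor)
    return min(candidates, key=total)

# Content prefers (3, 3), but the user marked x, y in [2..5] as keep-out,
# so the optimizer settles on the nearest position outside that region.
print(best_placement(anchor=(3, 3), avoid_region=(2, 2, 5, 5)))  # → (1, 3)
```

User-driven constraints in the paper's sense would correspond to letting users add, remove, and parameterize terms like `occlusion_cost` at run time, directly in the environment, rather than editing weights in code.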
3
The Tactile Dimension: A Method for Physicalizing Touch Behaviors
Laura J. Perovich (Northeastern University, Boston, Massachusetts, United States); Bernice Rogowitz (Visual Perspectives Research, Westchester, New York, United States); Victoria Crabb (Northeastern University, Boston, Massachusetts, United States); Jack Vogelsang (Northeastern University, Boston, Massachusetts, United States); Sara Hartleben (Northeastern University, Boston, Massachusetts, United States); Dietmar Offenhuber (Northeastern University, Boston, Massachusetts, United States)
Traces of touch provide valuable insight into how we interact with the physical world. Measuring touch behavior, however, is expensive and imprecise. Utilizing a fluorescent UV tracer powder, we developed a low-cost analog method to capture persistent, high-contrast touch records on arbitrary objects. We describe our process for selecting a tracer, methods for capturing, enhancing, and aggregating traces, and approaches to examining qualitative aspects of the user experience. Three user studies demonstrate key features of this method. First, we show that it provides clear and durable traces on objects representative of scientific visualization, physicalization, and product design. Second, we demonstrate how this method could be used to study touch perception, by measuring how task and narrative framing elicit different touch behaviors on the same object. Third, we demonstrate how this method can be used to evaluate data physicalizations by observing how participants touch two different physicalizations of COVID-19 time-series data.
3
The Walking Talking Stick: Understanding Automated Note-Taking in Walking Meetings
Luke Haliburton (LMU Munich, Munich, Germany); Natalia Bartłomiejczyk (Lodz University of Technology, Lodz, Poland); Albrecht Schmidt (LMU Munich, Munich, Germany); Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden); Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland)
While walking meetings offer a healthy alternative to sit-down meetings, they also pose practical challenges. Taking notes is difficult while walking, which limits the potential of walking meetings. To address this, we designed the Walking Talking Stick---a tangible device with integrated voice recording, transcription, and a physical highlighting button to facilitate note-taking during walking meetings. We investigated our system in a three-condition between-subjects user study with thirty pairs of participants (N=60) who conducted 15-minute outdoor walking meetings. Participants either used clip-on microphones, the prototype without the button, or the prototype with the highlighting button. We found that the tangible device increased task focus, and the physical highlighting button facilitated turn-taking and resulted in more useful notes. Our work demonstrates how interactive artifacts can incentivize users to hold meetings in motion and enhance conversation dynamics. We contribute insights for future systems which support conducting work tasks in mobile environments.
3
Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools
Frederic Gmeiner (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Humphrey Yang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Kenneth Holstein (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Nikolas Martelaro (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks. These tools take on more agentic roles than traditional computer-aided design tools and are often portrayed as “co-creators.” Yet, working effectively with such systems requires different skills than working with complex CAD tools alone. To date, we know little about how engineering designers learn to work with AI-based design tools. In this study, we observed trained designers as they learned to work with two AI-based tools on a realistic design task. We find that designers face many challenges in learning to effectively co-create with current systems, including challenges in understanding and adjusting AI outputs and in communicating their design goals. Based on our findings, we highlight several design opportunities to better support designer-AI co-creation.
3
Bias-Aware Systems: Exploring Indicators for the Occurrences of Cognitive Biases when Facing Different Opinions
Nattapat Boonprakong (University of Melbourne, Parkville, Victoria, Australia); Xiuge Chen (The University of Melbourne, Melbourne, Victoria, Australia); Catherine Davey (University of Melbourne, Parkville, Victoria, Australia); Benjamin Tag (University of Melbourne, Melbourne, Victoria, Australia); Tilman Dingler (University of Melbourne, Melbourne, Victoria, Australia)
Cognitive biases have been shown to play a critical role in creating echo chambers and spreading misinformation. They undermine our ability to evaluate information and can influence our behaviour without our awareness. To allow the study of occurrences and effects of biases on information consumption behaviour, we explore indicators for cognitive biases in physiological and interaction data. Therefore, we conducted two experiments investigating how people experience statements that are congruent or divergent from their own ideological stance. We collected interaction data, eye tracking data, hemodynamic responses, and electrodermal activity while participants were exposed to ideologically tainted statements. Our results indicate that people spend more time processing statements that are incongruent with their own opinion. We detected differences in blood oxygenation levels between congruent and divergent opinions, a first step towards building systems to detect and quantify cognitive biases.
3
Can Voice Assistants Be Microaggressors? Cross-Race Psychological Responses to Failures of Automatic Speech Recognition
Kimi Wenzel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Nitya Devireddy (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Cam Davison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Geoff Kaufman (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Language technologies have a racial bias, committing greater errors for Black users than for white users. However, little work has evaluated what effect these disparate error rates have on users themselves. The present study aims to understand if speech recognition errors in human-computer interactions may mirror the same effects as misunderstandings in interpersonal cross-race communication. In a controlled experiment (N=108), we randomly assigned Black and white participants to interact with a voice assistant pre-programmed to exhibit a high versus low error rate. Results revealed that Black participants in the high error rate condition, compared to Black participants in the low error rate condition, exhibited significantly higher levels of self-consciousness, lower levels of self-esteem and positive affect, and less favorable ratings of the technology. White participants did not exhibit this disparate pattern. We discuss design implications and the diverse research directions to which this initial study aims to contribute.
3
ChameleonControl: Teleoperating Real Human Surrogates through Mixed Reality Gestural Guidance for Remote Hands-on Classrooms
Mehrad Faridan (University of Calgary, Calgary, Alberta, Canada)Bheesha Kumari (University of Calgary, Calgary, Alberta, Canada)Ryo Suzuki (University of Calgary, Calgary, Alberta, Canada)
We present ChameleonControl, a real-human teleoperation system for scalable remote instruction in hands-on classrooms. In contrast to existing video or AR/VR-based remote hands-on education, ChameleonControl uses a real human as a surrogate of a remote instructor. Building on existing human-based telepresence approaches, we contribute a novel method to teleoperate a human surrogate through synchronized mixed reality hand gestural navigation and verbal communication. By overlaying the remote instructor's virtual hands in the local user's MR view, the remote instructor can guide and control the local user as if they were physically present. This allows the local user/surrogate to synchronize their hand movements and gestures with the remote instructor, effectively teleoperating a real human. We deploy and evaluate our system in classrooms of physiotherapy training, as well as other application domains such as mechanical assembly, sign language and cooking lessons. The study results confirm that our approach can increase engagement and the sense of co-presence, showing potential for the future of remote hands-on classrooms.
3
GestureExplorer: Immersive Visualisation and Exploration of Gesture Data
Ang Li (Monash University, Melbourne, Australia)Jiazhou Liu (Monash University, Melbourne, VIC, Australia)Maxime Cordeil (The University Of Queensland, Brisbane, Australia)Jack Topliss (University of Canterbury, Christchurch, Canterbury, New Zealand)Thammathip Piumsomboon (University of Canterbury, Christchurch, Canterbury, New Zealand)Barrett Ens (Monash University, Melbourne, Australia)
This paper presents the design and evaluation of GestureExplorer, an Immersive Analytics tool that supports interactive exploration, classification, and sensemaking of large sets of 3D temporal gesture data. GestureExplorer features 3D skeletal and trajectory visualisations of gestures combined with abstract visualisations of clustered sets of gestures. By leveraging the large immersive space afforded by a Virtual Reality interface, our tool allows free navigation and control of viewing perspective for users to gain a better understanding of gestures. We explored a selection of classification methods to provide an overview of the dataset, linked to a detailed view of the data with different visualisation modalities. We evaluated GestureExplorer with two user studies and collected feedback from participants with diverse visualisation and analytics backgrounds. Our results demonstrated GestureExplorer's promising capability for providing a useful and engaging experience in exploring and analysing gesture data.
3
The Intricacies of Social Robots: Secondary Analysis of Fictional Documentaries to Explore the Benefits and Challenges of Robots in Complex Social Settings
Judith Dörrenbächer (University of Siegen, Siegen, Germany)Ronda Ringfort-Felner (University of Siegen, Siegen, Germany)Marc Hassenzahl (University of Siegen, Siegen, Germany)
In the design of social robots, the focus is often on the robot itself rather than on the intricacies of possible application scenarios. In this paper, we examine eight fictional documentaries about social robots, such as SEYNO, a robot that promotes respect between passengers in trains, or PATO, a robot to watch movies with. Overall, robots were conceptualized either (1) to substitute for humans in relationships or (2) to mediate relationships (human-human-robot-interaction). While the former is the basis of many current approaches to social robotics, the latter is less common, but particularly interesting. For instance, the mediation perspective fundamentally impacts the role a robot takes (e.g., role model, black sheep, ally, opponent, moralizer) and thus its potential function and form. From the substitution perspective, robots are expected to mimic human emotions; from the mediation perspective, robots can be positive precisely because they remain objective and are neither emotional nor empathic.
3
Subjective Probability Correction for Uncertainty Representations
Fumeng Yang (Northwestern University, Evanston, Illinois, United States)Maryam Hedayati (Northwestern University, Evanston, Illinois, United States)Matthew Kay (Northwestern University, Chicago, Illinois, United States)
We propose a new approach to uncertainty communication: we keep the uncertainty representation fixed, but adjust the distribution displayed to compensate for biases in people’s subjective probability in decision-making. To do so, we adopt a linear-in-probit model of subjective probability and derive two corrections to a Normal distribution based on the model’s intercept and slope: one correcting all right-tailed probabilities, and the other preserving the mode and one focal probability. We then conduct two experiments on U.S. demographically-representative samples. We show participants hypothetical U.S. Senate election forecasts as text or a histogram and elicit their subjective probabilities using a betting task. The first experiment estimates the linear-in-probit intercepts and slopes, and confirms the biases in participants’ subjective probabilities. The second, preregistered follow-up shows participants the bias-corrected forecast distributions. We find the corrections substantially improve participants’ decision quality by reducing the integrated absolute error of their subjective probabilities compared to the true probabilities. These corrections can be generalized to any univariate probability or confidence distribution, giving them broad applicability. Our preprint, code, data, and preregistration are available at https://doi.org/10.17605/osf.io/kcwxm.
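The correction principle can be sketched numerically. Assuming a linear-in-probit model of subjective probability, s(p) = Φ(a + b·Φ⁻¹(p)), the displayed probability that compensates for the bias is obtained by inverting the model; the intercept and slope below are illustrative values, not the paper's fitted estimates:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal: cdf is Phi, inv_cdf is Phi^-1

def subjective(p, a, b):
    """Linear-in-probit model: perceived probability for true probability p."""
    return N.cdf(a + b * N.inv_cdf(p))

def corrected_display(p_true, a, b):
    """Invert the model: probability to display so perception matches p_true."""
    return N.cdf((N.inv_cdf(p_true) - a) / b)

a, b = 0.1, 0.7  # illustrative intercept/slope, not the paper's fitted values
shown = corrected_display(0.8, a, b)
perceived = subjective(shown, a, b)  # recovers 0.8 after correction
```

Displaying `shown` instead of the true 0.8 means a viewer with this bias profile perceives approximately the true probability.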
3
Smartphone-derived Virtual Keyboard Dynamics Coupled with Accelerometer Data as a Window into Understanding Brain Health
Emma Ning (University of Illinois at Chicago, Chicago, Illinois, United States)Andrea T. Cladek (University of Illinois at Chicago, Chicago, Illinois, United States)Mindy K. Ross (University of Illinois at Chicago, Chicago, Illinois, United States)Sarah Kabir (University of Illinois at Chicago, Chicago, Illinois, United States)Amruta Barve (University of Illinois at Chicago, Chicago, Illinois, United States)Ellyn Kennelly (Wayne State University, Detroit, Michigan, United States)Faraz Hussain (University of Illinois at Chicago, Chicago, Illinois, United States)Jennifer Duffecy (University of Illinois at Chicago, Chicago, Illinois, United States)Scott Langenecker (University of Utah, Salt Lake City, Utah, United States)Theresa Nguyen (University of Illinois at Chicago, Chicago, Illinois, United States)Theja Tulabandhula (University of Illinois at Chicago, Chicago, Illinois, United States)John Zulueta (University of Illinois at Chicago, Chicago, Illinois, United States)Olusola A. Ajilore (University of Illinois, Chicago (UIC), Chicago, Illinois, United States)Alexander P. Demos (University of Illinois at Chicago, Chicago, Illinois, United States)Alex Leow (University of Illinois, Chicago (UIC), Chicago, Illinois, United States)
We examine the feasibility of using accelerometer data exclusively collected during typing on a custom smartphone keyboard to study whether typing dynamics are associated with daily variations in mood and cognition. As part of an ongoing digital mental health study involving mood disorders, we collected data from a well-characterized clinical sample (N = 85) and classified accelerometer data per typing session into orientation (upright vs. not) and motion (active vs. not). The mood disorder group showed lower cognitive performance despite mild symptoms (depression/mania). There were also diurnal pattern differences with respect to cognitive performance: individuals with higher cognitive performance typed faster and were less sensitive to time of day. They also exhibited more well-defined diurnal patterns in smartphone keyboard usage: they engaged with the keyboard more during the day and tapered their usage more at night compared to those with lower cognitive performance, suggesting a healthier usage of their phone.
3
UEyes: Understanding Visual Saliency across User Interface Types
Yue Jiang (Aalto University, Espoo, Finland)Luis A.. Leiva (University of Luxembourg, Esch-sur-Alzette, Luxembourg)Hamed Rezazadegan Tavakoli (Nokia Technologies, Espoo, Finland)Paul R. B. Houssel (University of Luxembourg, Esch-sur-Alzette, Luxembourg)Julia Kylmälä (Aalto University, Espoo, Finland)Antti Oulasvirta (Aalto University, Helsinki, Finland)
While user interfaces (UIs) display elements such as images and text in a grid-based layout, UI types differ significantly in the number of elements and how they are displayed. For example, webpage designs rely heavily on images and text, whereas desktop UIs tend to feature numerous small images. To examine how such differences affect the way users look at UIs, we collected and analyzed a large eye-tracking dataset, UEyes (62 participants and 1,980 UI screenshots), covering four major UI types: webpage, desktop UI, mobile UI, and poster. We analyze differences across UI types in biases related to factors such as color, location, and gaze direction. We also compare state-of-the-art predictive models and propose improvements for better capturing typical tendencies across UI types. Both the dataset and the models are publicly available.
2
SwellSense: Creating 2.5D interactions with micro-capsule paper
Tingyu Cheng (Interactive Computing, Atlanta, Georgia, United States)Zhihan Zhang (University of Washington, Seattle, Washington, United States)Bingrui Zong (Georgia Institute of Technology, Atlanta, Georgia, United States)Yuhui Zhao (Georgia Institute of Technology, Atlanta, Georgia, United States)Zekun Chang (Cornell University, Ithaca, New York, United States)Ye Jun Kim (Georgia Institute of Technology, Atlanta, Georgia, United States)Clement Zheng (National University of Singapore, Singapore, Singapore, Singapore)Gregory D.. Abowd (Northeastern University, Boston, Massachusetts, United States)HyunJoo Oh (Georgia Institute of Technology, Atlanta, Georgia, United States)
In this paper, we propose SwellSense, a fabrication technique for screen printing stretchable circuits onto a special micro-capsule paper, creating localized swelling patterns with sensing capabilities. This simple technique allows users to create a wide range of paper-based tactile interactive devices, which mostly maintain a 2D planar form factor but can also be curved or folded into 3D interactive artifacts. We first present design guidelines to support various tactile interaction designs, including basic tactile graphic geometries, patterns with directional density, and finer interactive textures with embedded sensing such as touch sensors, pressure sensors, and mechanical switches. We then provide a design editor that enables users to design more creatively with the SwellSense technique. We provide a technical evaluation and a user evaluation to validate the basic performance of SwellSense. Lastly, we demonstrate several application examples and conclude with a discussion of current limitations and future work.
2
Amortized Inference with User Simulations
Hee-Seung Moon (Yonsei University, Incheon, Korea, Republic of)Antti Oulasvirta (Aalto University, Helsinki, Finland)Byungjoo Lee (Yonsei University, Seoul, Korea, Republic of)
There have been significant advances in simulation models predicting human behavior across various interactive tasks. One issue remains, however: identifying the parameter values that best describe an individual user. These parameters often express personal cognitive and physiological characteristics, and inferring their exact values has significant effects on individual-level predictions. Still, the high complexity of simulation models usually causes parameter inference to consume prohibitively large amounts of time, as much as days per user. We investigated amortized inference for its potential to reduce inference time dramatically, to mere tens of milliseconds. Its principle is to pre-train a neural proxy model for probabilistic inference, using synthetic data simulated from a range of parameter combinations. From examining the efficiency and prediction performance of amortized inference in three challenging cases that involve real-world data (menu search, point-and-click, and touchscreen typing), the paper demonstrates that an amortized inference approach permits analyzing large-scale datasets by means of simulation models. It also addresses emerging opportunities and challenges in applying amortized inference in HCI.
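The amortization principle can be illustrated with a toy sketch: expensive simulation runs happen once, offline, and per-user inference becomes a cheap lookup. Here a nearest-neighbour table stands in for the paper's pre-trained neural proxy, and the simulator is an invented one-parameter model of movement time, not any of the paper's three tasks:

```python
import random

def simulate_trials(theta, n=50, seed=0):
    """Toy user simulator: mean movement time grows with parameter theta."""
    rng = random.Random(seed)
    return [theta * 0.2 + rng.gauss(0.5, 0.05) for _ in range(n)]

def summary(trials):
    return sum(trials) / len(trials)  # summary statistic of observed behavior

# Offline (slow): pre-compute (parameter, summary) pairs across the range.
grid = [t / 10 for t in range(1, 51)]
table = [(theta, summary(simulate_trials(theta, seed=i)))
         for i, theta in enumerate(grid)]

def amortized_infer(observed):
    """Online (fast): match the observed summary against the table."""
    obs = summary(observed)
    return min(table, key=lambda ts: abs(ts[1] - obs))[0]

user_data = simulate_trials(2.0, seed=99)  # pretend this is a real user
theta_hat = amortized_infer(user_data)     # recovers a value near 2.0
```

The cost structure is the point: all simulator calls occur before any user data arrives, so fitting an individual reduces to a constant-time query.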
2
Crownboard: A One-Finger Crown-Based Smartwatch Keyboard for Users with Limited Dexterity
Gulnar Rakhmetulla (University of California, Merced, Merced, California, United States)Ahmed Sabbir. Arif (University of California, Merced, Merced, California, United States)
Mobile text entry is difficult for people with motor impairments due to limited access to smartphones and the need for precise target selection on touchscreens. Text entry on smartwatches, on the other hand, has not been well explored for this population. Crownboard enables people with limited dexterity to enter text on a smartwatch using its crown. It uses an alphabetical layout divided into eight zones around the bezel. The zones are scanned either automatically or manually by rotating the crown, then selected by pressing the crown. Crownboard decodes zone sequences into words and displays word suggestions. We validated its design in multiple studies. First, a comparison between manual and automated scanning revealed that manual scanning is faster and more accurate. Second, a comparison between clockwise and shortest-path scanning identified the former as faster and more accurate. In the final study with representative users, only 30% of participants could use the default Qwerty keyboard. They were 9% and 23% faster with manual and automated Crownboard, respectively. All participants were able to use both variants of Crownboard.
2
Characteristics of Deep and Skim Reading on Smartphones vs. Desktop: A Comparative Study
Xiuge Chen (The University of Melbourne, Melbourne, Victoria, Australia)Namrata Srivastava (Monash University, Melbourne, Victoria, Australia)Rajiv Jain (Adobe Research, College Park, Maryland, United States)Jennifer Healey (Adobe Research, San Jose, California, United States)Tilman Dingler (University of Melbourne, Melbourne, Victoria, Australia)
Deep reading fosters text comprehension, memory, and critical thinking. The growing prevalence of digital reading on mobile interfaces raises concerns that deep reading is being replaced by skimming and sifting through information, but this is currently unmeasured. Traditionally, reading quality is assessed using comprehension tests, which require readers to explicitly answer a set of carefully composed questions. To quantify and understand reading behaviour in natural settings and at scale, however, implicit measures of deep versus skim reading are needed across desktop and mobile devices, the most prominent digital reading platforms. In this paper, we present an approach to systematically induce deep and skim reading and subsequently train classifiers to discriminate these two reading styles based on eye movement patterns and interaction data. Based on a user study with 29 participants, we created models that detect deep reading on both devices with up to 0.82 AUC. We present the characteristics of deep reading and discuss how our models can be used to measure the effect of reading UI design and monitor long-term changes in reading behaviours.
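The reported AUC has a concrete interpretation: the probability that a randomly chosen deep-reading sample receives a higher classifier score than a randomly chosen skimming sample. A minimal sketch with invented scores, not the study's data:

```python
def auc(labels, scores):
    """AUC as the probability a random positive outranks a random negative
    (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy classifier scores: 1 = deep reading, 0 = skimming (illustrative only)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
print(auc(labels, scores))  # 8/9, i.e. about 0.89
```

An AUC of 0.82, as in the paper, therefore means the models rank a deep-reading session above a skimming session about 82% of the time, regardless of any decision threshold.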
2
“It can bring you in the right direction”: Episode-Driven Data Narratives to Help Patients Navigate Multidimensional Diabetes Data to Make Care Decisions
Shriti Raj (University of Michigan, Ann Arbor, Michigan, United States)Toshi Gupta (University of Michigan, Ann Arbor, Michigan, United States)Joyce Lee (University of Michigan, Ann Arbor, Michigan, United States)Matthew Kay (Northwestern University, Chicago, Illinois, United States)Mark W. Newman (U. of Michigan, Ann Arbor, Michigan, United States)
Engaging with multiple streams of personal health data to inform self-care of chronic health conditions remains a challenge. Existing informatics tools provide limited support for patients to make data actionable. To design better tools, we conducted two studies with Type 1 diabetes patients and their clinicians. In the first study, we observed data review sessions between patients and clinicians to articulate the tasks involved in assessing different types of data from diabetes devices to make care decisions. Drawing upon these tasks, we designed novel data interfaces called episode-driven data narratives and performed a task-driven evaluation. We found that as compared to the commercially available diabetes data reports, episode-driven data narratives improved engagement and decision-making with data. We discuss implications for designing data interfaces to support interaction with multidimensional health data to inform self-care.
2
How Bold can we be? The impact of adjusting Font Grade on Readability in light and dark Polarities
Hilary Palmén (Google LLC, Mountain View, California, United States)Michael Dean. Gilbert (Google LLC, Mountain View, California, United States)Dave Crossland (Google LLC, Mountain View, California, United States)
Variable font file technology enables adjusting fonts on scaled axes that can include weight and grade. While making text bold increases character width, grade achieves boldness without increasing character width or causing text reflow. Through two studies with a total of 459 participants, we examined the effect of varying grade levels on both glancing and paragraph reading tasks in light and dark modes. We show that dark text on a light background (Light Mode, LM) is read reliably faster than its polar opposite (Dark Mode, DM). We found an effect of mode for both glance and paragraph reading, and an effect of grade in LM at heavier grade levels. Paragraph readers do not choose, or prefer, LM over DM despite its fluency benefits and reported visual clarity. Software designers can vary grade across the tested font formats to influence design aesthetics and user preferences without worrying about reducing reading fluency.
2
Predicting Gaze-based Target Selection in Augmented Reality Headsets based on Eye and Head Endpoint Distributions
Yushi Wei (Xi'an Jiaotong-Liverpool University, Suzhou, China)Rongkai Shi (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)Difeng Yu (University of Melbourne, Melbourne, Victoria, Australia)Yihong Wang (Xi'an Jiaotong-Liverpool University, Suzhou, China)Yue Li (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)Lingyun Yu (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)Hai-Ning Liang (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)
Target selection is a fundamental task in interactive Augmented Reality (AR) systems. Predicting the intended target of selection in such systems can provide users with a smooth, low-friction interaction experience. Our work aims to predict gaze-based target selection in AR headsets with eye and head endpoint distributions, which describe the probability distribution of eye and head 3D orientation when a user triggers a selection input. We first conducted a user study to collect users’ eye and head behavior in a gaze-based pointing selection task with two confirmation mechanisms (air tap and blinking). Based on the study results, we then built two models: a unimodal model using only eye endpoints and a multimodal model using both eye and head endpoints. Results from a second user study showed that the pointing accuracy is improved by approximately 32% after integrating our models into gaze-based selection techniques.
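The endpoint-distribution idea can be sketched as maximum-likelihood target prediction: fit a Gaussian over the endpoints recorded for each target, then score a new endpoint against every model and pick the best. A minimal 2D sketch with independent axes and invented data (the paper models 3D eye and head orientations, and combines both modalities):

```python
from statistics import NormalDist, mean, stdev

def fit_endpoints(samples):
    """Fit independent Gaussians to the (x, y) endpoints of one target."""
    xs, ys = zip(*samples)
    return NormalDist(mean(xs), stdev(xs)), NormalDist(mean(ys), stdev(ys))

def predict(endpoint, models):
    """Pick the target whose endpoint distribution best explains the sample."""
    x, y = endpoint
    return max(models, key=lambda t: models[t][0].pdf(x) * models[t][1].pdf(y))

# Invented calibration endpoints for two targets (e.g. degrees of visual angle)
models = {
    "A": fit_endpoints([(0.1, 0.0), (-0.2, 0.1), (0.0, -0.1), (0.1, 0.2)]),
    "B": fit_endpoints([(5.0, 4.9), (5.2, 5.1), (4.8, 5.0), (5.1, 4.8)]),
}
print(predict((0.05, 0.05), models))  # A
```

The same scoring step generalizes to the multimodal case by multiplying in the likelihood of the head endpoint as well, which is roughly how a unimodal model extends to an eye-plus-head model.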
2
Marking Material Interactions with Computer Vision
Peter Gyory (University of Colorado Boulder, Boulder, Colorado, United States)S. Sandra Bae (University of Colorado Boulder, Boulder, Colorado, United States)Ruhan Yang (University of Colorado Boulder, Boulder, Colorado, United States)Ellen Yi-Luen Do (University of Colorado Boulder, Boulder, Colorado, United States)Clement Zheng (National University of Singapore, Singapore, Singapore, Singapore)
The electronics-centered approach to physical computing presents challenges when designers build tangible interactive systems due to its inherent emphasis on circuitry and electronic components. To explore an alternative physical computing approach we have developed a computer vision (CV) based system that uses a webcam, computer, and printed fiducial markers to create functional tangible interfaces. Through a series of design studios, we probed how designers build tangible interfaces with this CV-driven approach. In this paper, we apply the annotated portfolio method to reflect on the fifteen outcomes from these studios. We observed that CV markers offer versatile materiality for tangible interactions, afford the use of democratic materials for interface construction, and engage designers in embodied debugging with their own vision as a proxy for CV. By sharing our insights, we inform other designers and educators who seek alternative ways to facilitate physical computing and tangible interaction design.
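Fiducial markers work by encoding an ID in a binary cell grid that a vision pipeline can locate and read from a webcam frame. The toy decoder below illustrates only the encoding idea; it is not the marker scheme or CV library the authors used:

```python
def decode_marker(grid):
    """Read a marker ID from a binary cell grid: the outer border is the
    locator pattern and is ignored; the inner cells encode the ID row-major,
    most-significant bit first. A toy scheme, not ArUco's."""
    inner = [row[1:-1] for row in grid[1:-1]]
    bits = [b for row in inner for b in row]
    return sum(bit << i for i, bit in enumerate(reversed(bits)))

# 4x4 marker: black border plus a 2x2 payload encoding 0b1001 = 9
marker = [
    [0, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
print(decode_marker(marker))  # 9
```

In a real pipeline the hard part is upstream of this step: detecting the border, correcting perspective, and thresholding the cells, which is what libraries such as OpenCV's ArUco module provide.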
2
Visible Nuances: A Caption System to Visualize Paralinguistic Speech Cues for Deaf and Hard-of-Hearing Individuals
JooYeong Kim (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Sooyeon Ahn (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Jin-Hyuk Hong (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)
Captions help deaf and hard-of-hearing (DHH) individuals visually communicate voice information to better understand video content. In speech, the literal content and paralinguistic cues (e.g., pitch and nuance) work together to convey the real intention. However, current captions are limited in their capacity to deliver fine nuances because they cannot fully convey these paralinguistic cues. This paper proposes an audio-visualized caption system that automatically visualizes paralinguistic cues as various caption elements (thickness, height, font type, and motion). A comparative study with 20 DHH participants demonstrates how our system helps DHH individuals access paralinguistic cues while watching videos. Particularly in the case of formal talks, they could accurately identify the speaker's nuance more often than with current captions, without any practice or training. Once some issues of legibility and familiarity are addressed, the proposed caption system has the potential to enrich DHH individuals' video-watching experience, much as hearing people enjoy theirs.
2
Take My Hand: Automated Hand-Based Spatial Guidance for the Visually Impaired
Adil Rahman (University of Virginia, Charlottesville, Virginia, United States)Md Aashikur Rahman Azim (University of Virginia, Charlottesville, Virginia, United States)Seongkook Heo (University of Virginia, Charlottesville, Virginia, United States)
Tasks that involve locating objects and then moving hands to those specific locations, such as using touchscreens or grabbing objects on a desk, are challenging for the visually impaired. Over the years, audio guidance and haptic feedback have been a staple in hand navigation based assistive technologies. However, these methods require the user to interpret the generated directional cues and then manually perform the hand motions. In this paper, we present automated hand-based spatial guidance to bridge the gap between guidance and execution, allowing visually impaired users to move their hands between two points automatically, without any manual effort. We implement this concept through FingerRover, an on-finger miniature robot that carries the user's finger to target points. We demonstrate the potential applications that can benefit from automated hand-based spatial guidance. Our user study shows the potential of our technique in improving the interaction capabilities of people with visual impairments.
2
Imagine That! Imaginative Suggestibility Affects Presence in Virtual Reality
Crescent Jicol (University of Bath, Bath, United Kingdom)Christopher Clarke (University of Bath, Bath, United Kingdom)Emilia Tor (University of Bath, Bath, United Kingdom)Hiu Lam Yip (University of Bath, Bath, United Kingdom)Jinha Yoon (University of Bath, Bath, Somerset, United Kingdom)Chris Bevan (University of Bristol, Bristol, United Kingdom)Hugh Bowden (King's College London, London, United Kingdom)Elisa Brann (King's College London, London, United Kingdom)Kirsten Cater (University of Bristol, Bristol, United Kingdom)Richard Cole (University of Bristol, Bristol, United Kingdom)Quinton Deeley (King's College London, London, United Kingdom)Esther Eidinow (University of Bristol, Bristol, United Kingdom)Eamonn O'Neill (University of Bath, Bath, United Kingdom)Christof Lutteroth (University of Bath, Bath, United Kingdom)Michael J. Proulx (University of Bath, Bath, United Kingdom)
Personality characteristics can affect how much presence an individual experiences in virtual reality, and researchers have explored how it may be possible to prime users to increase their sense of presence. A personality characteristic that has yet to be explored in the VR literature is imaginative suggestibility, the ability of an individual to successfully experience an imaginary scenario as if it were real. In this paper, we explore how suggestibility and priming affect presence when consulting an ancient oracle in VR as part of an educational experience -- a common VR application. We show for the first time how imaginative suggestibility is a major factor which affects presence and emotions experienced in VR, while priming cues have no effect on participants' (n=128) user experience, contrasting results from prior work. We consider the impacts of these findings for VR design and provide guidelines based on our results.
2
“I normally wouldn't talk with strangers”: Introducing a Socio-Spatial Interface for Fostering Togetherness Between Strangers
Ge Guo (Cornell University, Ithaca, New York, United States)Gilly Leshed (Cornell University, Ithaca, New York, United States)Keith Evan. Green (Cornell University, Ithaca, New York, United States)
Interacting with strangers can be beneficial but also challenging. Fortunately, these challenges can lead to design opportunities. In this paper, we present the design and evaluation of a socio-spatial interface, SocialStools, that leverages the human propensity for embodied interaction to foster togetherness between strangers. SocialStools is an installation of three responsive stools on caster wheels that generate sound and imagery in the near environment as three strangers sit on them, move them, and rotate them relative to each other. In our study with 12 groups of three strangers, we found a sense of togetherness emerged through interaction, evidenced by different patterns of socio-spatial movements, verbal communication, non-verbal behavior, and interview responses. We present our findings, articulate reasons for the cultivation of togetherness, consider the unique social affordances of our spatial interface in shifting attention during interpersonal communication, and provide design implications. This research contributes insights toward designing cyber-physical interfaces that foster interaction and togetherness among strangers at a time when cultivating togetherness is especially critical.
2
Love on the spectrum: Toward Inclusive online dating experience of autistic individuals
Dasom Choi (KAIST, Daejeon, Korea, Republic of)Sung-In Kim (Seoul Dasiseogi Homeless Support Center, Seoul, Korea, Republic of)Sunok Lee (KAIST, Daejeon, Korea, Republic of)Hyunseung Lim (KAIST, Daejeon, Korea, Republic of)Hee Jeong Yoo (Seoul National University Bundang Hospital, Seongnam, Korea, Republic of)Hwajung Hong (KAIST, Daejeon, Korea, Republic of)
Online dating is a space where autistic individuals can find romantic partners with reduced social demands. Autistic individuals are often expected to adapt their behaviors to the social norms underlying the online dating platform to appear as desirable romantic partners. However, given that their autistic traits can lead them to different expectations of dating, it is uncertain whether conforming their behaviors to the norm will guide them to the person they truly want. In this paper, we explored the perceptions and expectations of autistic adults in online dating through interviews and workshops. We found that autistic people desired to know whether they behaved according to the platform's norms. Still, they expected to keep their unique characteristics rather than unconditionally conform to the norm. We conclude by providing suggestions for designing inclusive online dating experiences that could foster self-guided decisions of autistic users and embrace their unique characteristics.
2
How Instructional Data Physicalization Fosters Reflection in Personal Informatics
Marit Bentvelzen (Utrecht University, Utrecht, Netherlands)Julia Dominiak (Lodz University of Technology, Łódź, Poland)Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland)Frederique Henraat (Utrecht University, Utrecht, Netherlands)Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden)
The ever-increasing number of devices quantifying our lives offers a perspective of high awareness of one's wellbeing, yet it remains a challenge for personal informatics (PI) to effectively support data-based reflection. Effective reflection is recognised as a key factor for PI technologies to foster wellbeing. Here, we investigate whether building tangible representations of health data can offer engaging and reflective experiences. We conducted a between-subjects study where n=60 participants explored their immediate blood pressure data in relation to medical norms. They either used a standard mobile app, built a data representation from LEGO bricks based on instructions, or completed a free-form brick build. We found that building with instructions fostered more comparison and using bricks fostered focused attention. The free-form condition required extra time to complete, and lacked usability. Our work shows that designing instructional physicalisation experiences for PI is a means of improving engagement and understanding of personal data.
2
Reality Rifts: Wonder-ful Interfaces by Disrupting Perceptual Causality
Lung-Pan Cheng (National Taiwan University, Taipei, Taiwan)Yi Chen (National Taiwan University, Taipei, Taiwan)Yi-Hao Peng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Christian Holz (ETH Zürich, Zurich, Switzerland)
Reality Rifts are interfaces between the physical and the virtual reality, where incoherent observations of physical behavior lead users to imagine comprehensive and plausible end-to-end dynamics. Reality Rifts emerge in interactive physical systems that lack one or more components that are central to their operation, yet where the physical end-to-end interaction persists with plausible outcomes. Even in the presence of a Reality Rift, users can still interact with a system—much like they would with the unaltered and complete counterpart—leading them to implicitly infer the existence and imagine the behavior of the lacking components from observable phenomena and outcomes. Therefore, dynamic systems with Reality Rifts trigger doubt, curiosity, and rumination—a sense of wonder that users experience when observing a Reality Rift. In this paper, we explore how interactive systems can elicit and guide the user's imagination by integrating Reality Rifts. We outline the design process for opening a Reality Rift in interactive physical systems, describe the resulting design space, and explore it through six characteristic prototypes. To understand to what extent and with which qualities these prototypes indeed induce a sense of wonder during an interaction, we evaluated Reality Rifts in a field deployment with 50 participants. We discuss participants' behavior and derive factors for the implementation of future wonder-ful experiences.
2
Layout Generation for Various Scenarios in Mobile Shopping Apps
Qianzhi Jing (College of Computer Science and Technology, Hangzhou, China)TingTing Zhou (Alibaba Group, Hangzhou, Zhejiang, China)Yixin Zeng (Zhejiang University, Hangzhou, China)Liuqing Chen (Zhejiang University, Hangzhou, Zhejiang, China)Lingyun Sun (Zhejiang University, Hangzhou, China)Yankun Zhen (Alibaba Group, Hangzhou, Zhejiang, China)Yichun Du (Alibaba Group, Hangzhou, China)
Layout is essential for the product listing pages (PLPs) in mobile shopping applications. To clearly convey the information that consumers require and to achieve specific functions, PLP layouts often have many variations driven by scenarios. In this work, we study PLP layout design for different scenarios and propose a design space to guide the large-scale creation of PLPs. We propose LayoutVQ-VAE, a novel model specialized in generating layouts with internal and external constraints. LayoutVQ-VAE differs from previous methods as it learns a discrete latent representation of layout and can model the relationship between layout representation and scenarios without applying heuristics. Experiments on publicly available benchmarks for different layout types validate that our method performs comparably or favorably against the state-of-the-art methods. Case studies show that the proposed approach, including the design space and model, is effective in producing large-scale, high-quality PLP layouts for mobile shopping platforms.
2
Nooks: Social Spaces to Lower Hesitations in Interacting with New People at Work
Shreya Bali (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Pranav Khadpe (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Geoff Kaufman (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Chinmay Kulkarni (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Initiating conversations with new people at work is often intimidating because of uncertainty about their interests. People worry others may reject their attempts to initiate conversation or that others may not enjoy the conversation. We introduce a new system, Nooks, built on Slack, that reduces fear of social evaluation by enabling individuals to initiate any conversation as a nook—a conversation room that identifies its topic, but not its creator. Automatically convening others interested in the nook, Nooks further reduces fears of social evaluation by guaranteeing individuals in advance that others they are about to interact with are interested in the conversation. In a multi-month deployment with participants in a summer research program, Nooks provided participants with non-threatening and inclusive interaction opportunities, and ambient awareness, leading to new interactions online and offline. Our results demonstrate how intentionally designed social spaces can reduce fears of social evaluation and catalyze new workplace connections.
2
InfinitePaint: Painting in Virtual Reality with Passive Haptics Using Wet Brushes and a Physical Proxy Canvas
Andreas Rene Fender (ETH Zürich, Zurich, Switzerland)Thomas Roberts (ETH Zürich, Zurich, Switzerland)Tiffany Luong (ETH Zürich, Zürich, Switzerland)Christian Holz (ETH Zürich, Zurich, Switzerland)
Digital painting interfaces require an input fidelity that preserves the artistic expression of the user. Drawing tablets allow for precise and low-latency sensing of pen motions and other parameters like pressure to convert them to fully digitized strokes. A drawback is that those interfaces are rigid. While soft brushes can be simulated in software, the haptic sensation of the rigid pen input device is different compared to using a soft wet brush on paper. We present InfinitePaint, a system that supports digital painting in Virtual Reality on real paper with a real wet brush. We use special paper that turns black wherever it comes into contact with water and turns blank again upon drying. A single camera captures those temporary strokes and digitizes them while applying properties like color or other digital effects. We tested our system with artists and compared the subjective experience with a drawing tablet.
2
Supporting Piggybacked Co-Located Leisure Activities via Augmented Reality
Samantha Reig (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Erica Principe Cruz (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Melissa Powers (New York University, New York, New York, United States)Jennifer He (Stanford University, Stanford, California, United States)Timothy Chong (University of Washington, Seattle, Washington, United States)Yu Jiang Tham (Snap Inc., Seattle, Washington, United States)Sven Kratz (Snap Inc., Seattle, Washington, United States)Ava Robinson (Northwestern University, Evanston, Illinois, United States)Brian A. Smith (Columbia University, New York, New York, United States)Rajan Vaish (Snap Inc., Santa Monica, California, United States)Andrés Monroy-Hernández (Princeton University, Princeton, New Jersey, United States)
Technology, especially the smartphone, is villainized for taking meaning and time away from in-person interactions and secluding people into "digital bubbles''. We believe this is not an intrinsic property of digital gadgets, but evidence of a lack of imagination in technology design. Leveraging augmented reality (AR) toward this end allows us to create experiences for multiple people, their pets, and their environments. In this work, we explore the design of AR technology that "piggybacks'' on everyday leisure to foster co-located interactions among close ties (with other people and pets). We designed, developed, and deployed three such AR applications, and evaluated them through a 41-participant and 19-pet user study. We gained key insights about the ability of AR to spur and enrich interaction in new channels, the importance of customization, and the challenges of designing for the physical aspects of AR devices (e.g., holding smartphones). These insights guide design implications for the novel research space of co-located AR.
2
What does it mean to cycle in Virtual Reality? Exploring Cycling Fidelity and Control of VR Bicycle Simulators
Andrii Matviienko (Technical University of Darmstadt, Darmstadt, Germany)Hajris Hoxha (Technical University of Darmstadt, Darmstadt, Germany)Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
Creating highly realistic Virtual Reality (VR) bicycle experiences can be time-consuming and expensive. Moreover, it is unclear what hardware parts are necessary to design a bicycle simulator and whether a bicycle is needed at all. In this paper, we investigated cycling fidelity and control of VR bicycle simulators. For this, we developed and evaluated three cycling simulators: (1) cycling without a bicycle (bikeless), (2) cycling on a fixed (stationary) and (3) moving bicycle (tandem) with four levels of control (no control, steering, pedaling, and steering + pedaling). To evaluate all combinations of fidelity and control, we conducted a controlled experiment (N = 24) in indoor and outdoor settings. We found that the bikeless setup provides the highest feeling of safety, while the tandem leads to the highest realism without increasing motion sickness. Moreover, we discovered that bicycles are not essential for cycling in VR.
2
"We Speak Visually": User-generated Icons for Better Video-Mediated Mixed Group Communications Between Deaf and Hearing Participants
Yeon Soo Kim (KAIST, Daejeon, Korea, Republic of)Hyeonjeong Im (Industrial Design, KAIST, Daejeon, Korea, Republic of)Sunok Lee (KAIST, Daejeon, Korea, Republic of)Haena Cho (Industrial Design, KAIST, Daejeon, Korea, Republic of)Sangsu Lee (Industrial Design, KAIST, Daejeon, Korea, Republic of)
Since the outbreak of the COVID-19 pandemic, videoconferencing technology has been widely adopted as a convenient, powerful, and fundamental tool that has simplified many day-to-day tasks. However, video communication is dependent on audible conversation and can be strenuous for those who are Hard of Hearing. Communication methods used by the Deaf and Hard of Hearing community differ significantly from those used by the hearing community, and a distinct language gap is evident in workspaces that accommodate workers from both groups. Therefore, we integrated users in both groups to explore ways to alleviate obstacles in mixed-group videoconferencing by implementing user-generated icons. A participatory design methodology was employed to investigate how the users overcome language differences. We observed that individuals utilized icons within video-mediated meetings as a universal language to reinforce comprehension. Herein, we present design implications from these findings, along with recommendations for future icon systems to enhance and support mixed-group conversations.