The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Birds of a Feather Video-Flock Together: Design and Evaluation of an Agency-Based Parrot-to-Parrot Video-Calling System for Interspecies Ethical Enrichment
Rebecca Kleinberger (Northeastern University, Boston, Massachusetts, United States); Jennifer Cunha (Parrot Kindergarten, Jupiter, Florida, United States); Megha M. Vemuri (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States); Ilyena Hirskyj-Douglas (The University of Glasgow, Glasgow, United Kingdom)
Over 20 million parrots are kept as pets in the US, often lacking appropriate stimuli to meet their high social, cognitive, and emotional needs. After reviewing bird perception and agency literature, we developed an approach to allow parrots to engage in video-calling other parrots. Following a pilot experiment and expert survey, we ran a three-month study with 18 pet birds to evaluate the potential value and usability of a parrot-parrot video-calling system. We assessed the system in terms of perception, agency, engagement, and overall perceived benefits. With 147 bird-triggered calls, our results show that 1) every bird used the system, 2) most birds exhibited high motivation and intentionality, and 3) all caretakers reported perceived benefits, some arguably life-transformative, such as learning to forage or even to fly by watching others. We report on individual insights and propose considerations regarding ethics and the potential of parrot video-calling for enrichment.
“What if everyone is able to program?” – Exploring the Role of Software Development in Science Fiction
Kevin Krings (University of Siegen, Siegen, Germany); Nino S. Bohn (University of Siegen, Siegen, Germany); Nora Anna Luise Hille (University of Siegen, Siegen, Germany); Thomas Ludwig (University of Siegen, Siegen, Germany)
For decades, research around emerging technologies has been inspired by science fiction and vice versa. While so far almost exclusively the technologies themselves have been considered, we explore the underlying software development and programming approaches. We conduct a detailed media content analysis of twenty-seven movies that examines the role of software development in science fiction by identifying and investigating new approaches to programming and how software development is conceptualized and portrayed within science fiction scenes. With an additional analysis of eighteen design fiction stories exploring the scenario “What if everyone is able to program?”, we envision potential impacts of the democratization of software development on business and society. Our study opens new discussions and perspectives by investigating the current vision of the future of programming and uncovering new approaches to software development, which can serve as a starting point for further research in the HCI community.
Horse as Teacher: How human-horse interaction informs human-robot interaction
Eakta Jain (University of Florida, Gainesville, Florida, United States); Christina Gardner-McCune (University of Florida, Gainesville, Florida, United States)
Robots are entering our lives and workplaces as companions and teammates. Though much research has been done on how to interact with robots, teach robots and improve task performance, an open frontier for HCI/HRI research is how to establish a working relationship with a robot in the first place. Studies that explore the early stages of human-robot interaction are an emerging area of research. Simultaneously, there is resurging interest in how human-animal interaction could inform human-robot interaction. We present a first examination of early stage human-horse interaction through the lens of human-robot interaction, thus connecting these two areas. Following Strauss’ approach, we conduct a thematic analysis of data from three sources gathered over a year of field work: observations, interviews and journal entries. We contribute design guidelines based on our analyses and findings.
CiteSee: Augmenting Citations in Scientific Papers with Persistent and Personalized Historical Context
Joseph Chee Chang (Allen Institute for AI, Seattle, Washington, United States); Amy X. Zhang (University of Washington, Seattle, Washington, United States); Jonathan Bragg (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Andrew Head (University of Pennsylvania, Philadelphia, Pennsylvania, United States); Kyle Lo (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Doug Downey (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Daniel S. Weld (Allen Institute for Artificial Intelligence, Seattle, Washington, United States)
When reading a scholarly article, inline citations help researchers contextualize the current article and discover relevant prior work. However, it can be challenging to prioritize and make sense of the hundreds of citations encountered during literature reviews. This paper introduces CiteSee, a paper reading tool that leverages a user's publishing, reading, and saving activities to provide personalized visual augmentations and context around citations. First, CiteSee connects the current paper to familiar contexts by surfacing known citations a user had cited or opened. Second, CiteSee helps users prioritize their exploration by highlighting relevant but unknown citations based on saving and reading history. We conducted a lab study that suggests CiteSee is significantly more effective for paper discovery than three baselines. A field deployment study shows CiteSee helps participants keep track of their explorations and leads to better situational awareness and increased paper discovery via inline citation when conducting real-world literature reviews.
Short-Form Videos Degrade Our Capacity to Retain Intentions: Effect of Context Switching On Prospective Memory
Francesco Chiossi (LMU Munich, Munich, Germany); Luke Haliburton (LMU Munich, Munich, Germany); Changkun Ou (LMU Munich, Munich, Germany); Andreas Martin Butz (LMU Munich, Munich, Germany); Albrecht Schmidt (LMU Munich, Munich, Germany)
Social media platforms use short, highly engaging videos to catch users' attention. While the short-form video feeds popularized by TikTok are rapidly spreading to other platforms, we do not yet understand their impact on cognitive functions. We conducted a between-subjects experiment (N=60) investigating the impact of engaging with TikTok, Twitter, and YouTube while performing a Prospective Memory task (i.e., executing a previously planned action). The study required participants to remember intentions over interruptions. We found that the TikTok condition significantly degraded the users' performance in this task. As none of the other conditions (Twitter, YouTube, no activity) had a similar effect, our results indicate that the combination of short videos and rapid context-switching impairs intention recall and execution. We contribute a quantified understanding of the effect of social media feed format on Prospective Memory and outline consequences for media technology designers to avoid harming users' memory and wellbeing.
Surface I/O: Creating Devices with Functional Surface Geometry for Haptics and User Input
Yuran Ding (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Craig Shultz (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Surface I/O is a novel interface approach that functionalizes the exterior surface of devices to provide haptic and touch sensing without dedicated mechanical components. Achieving this requires a unique combination of surface features spanning the macro-scale (5 cm–1 mm), meso-scale (1 mm–200 μm), and micro-scale (<200 μm). This approach simplifies interface creation, allowing designers to iterate on form geometry, haptic feeling, and sensing functionality without the limitations of mechanical mechanisms. We believe this can contribute to the concept of "invisible ubiquitous interactivity at scale", where the simplicity and easy implementation of the technique allows it to blend with objects around us. While we prototyped our designs using 3D printers and laser cutters, our technique is applicable to mass production methods, including injection molding and stamping, enabling passive goods with new levels of interactivity.
BubbleTex: Designing Heterogenous Wettable Areas for Carbonation Bubble Patterns on Surfaces
Harpreet Sareen (The University of Tokyo, Tokyo, Japan); Yibo Fu (The New School, New York, New York, United States); Nour Boulahcen (Telecom ParisTech, Paris, France); Yasuaki Kakehi (The University of Tokyo, Tokyo, Japan)
Materials are a key part of our daily experiences. Recently, researchers have been devising new ways to utilize materials directly from our physical world for the design of objects and interactions. We present a new fabrication technique that enables control of CO2 bubble positions and their size within carbonated liquids. Instead of soap bubbles, boiling water, or droplets, we show the creation of patterns, images, and text through sessile bubbles that exhibit a lifetime of several days. Surfaces with mixed-wettability regions are created on glass and plastic using ceramic coatings or plasma projection, producing patterns that are nearly invisible to the human eye. Different regions react to liquids differently: when carbonated liquid is poured onto the surface, nucleation is activated and bubbles form in the hydrophobic regions, adhering strongly to the surface, with sizes controllable from 0.5 mm to 6.5 mm. During CO2 supersaturation, bubbles initially pop or become buoyant before stabilizing at their positions within minutes. Technical evaluation shows stabilization under various conditions. Our design software allows users to import images and convert them into parametric pixelation forms conducive to fabrication, resulting in bubble nucleation at the required positions. We present various applications demonstrating aspects that may be harnessed for a wide range of uses in daily life. Through this work, we enable the use of carbonation bubbles as a new design material for designers and researchers.
Full-hand Electro-Tactile Feedback without Obstructing Palmar Side of Hand
Yudai Tanaka (University of Chicago, Chicago, Illinois, United States); Alan Shen (University of Chicago, Chicago, Illinois, United States); Andy Kong (University of Chicago, Chicago, Illinois, United States); Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We present a technique to render tactile feedback to the palmar side of the hand while keeping it unobstructed and, thus, preserving manual dexterity during interactions with physical objects. We implement this by applying electro-tactile stimulation only to the back of the hand and to the wrist. In our approach, there are no electrodes on the palmar side, yet that is where tactile sensations are felt. While we place electrodes outside the user’s palm, we do so in strategic locations that conduct the electrical currents to the median/ulnar nerves, causing tactile sensations on the palmar side of the hand. In our user studies, we demonstrated that our approach renders tactile sensations to 11 different locations on the palmar side while keeping users’ palms free for dexterous manipulations. Our approach enables new applications such as tactile notifications during dexterous activities or VR experiences that rely heavily on physical props.
Affective Profile Pictures: Exploring the Effects of Changing Facial Expressions in Profile Pictures on Text-Based Communication
Chi-Lan Yang (The University of Tokyo, Tokyo, Japan); Shigeo Yoshida (The University of Tokyo, Tokyo, Japan); Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan); Takuji Narumi (The University of Tokyo, Tokyo, Japan); Naomi Yamashita (NTT, Keihanna, Japan)
When receiving text messages from unacquainted colleagues in fully remote workplaces, insufficient mutual understanding and limited social cues can lead people to misinterpret the tone of the message and further influence their impression of remote colleagues. Emojis have been commonly used for supporting expressive communication; however, people seldom use emojis before they become acquainted with each other. Hence, we explored how changing facial expressions in profile pictures could be an alternative channel to communicate socio-emotional cues. By conducting an online controlled experiment with 186 participants, we established that changing facial expressions of profile pictures can influence the impression of the message receivers toward the sender and the message valence when receiving neutral messages. Furthermore, presenting incongruent profile pictures to positive messages negatively affected the interpretation of the message valence, but did not have much effect on negative messages. We discuss the implications of affective profile pictures in supporting text-based communication.
Quantified Canine: Inferring Dog Personality From Wearables
Lakmal Meegahapola (Idiap Research Institute, Martigny, Switzerland); Marios Constantinides (Nokia Bell Labs, Cambridge, United Kingdom); Zoran Radivojevic (Nokia Bell Labs, Cambridge, United Kingdom); Hongwei Li (Nokia Bell Labs, Cambridge, United Kingdom); Daniele Quercia (Nokia Bell Labs, Cambridge, United Kingdom); Michael S. Eggleston (Nokia Bell Labs, Murray Hill, New Jersey, United States)
Being able to assess dog personality can be used, for example, to match shelter dogs with future owners and to personalize dog activities. Such an assessment typically relies on experts or psychological scales administered to dog owners, both of which are costly. To tackle that challenge, we built a device called “Patchkeeper” that can be strapped on the pet's chest and measures activity through an accelerometer and a gyroscope. In an in-the-wild deployment involving 12 healthy dogs, we collected 1300 hours of sensor activity data and dog personality test results from two validated questionnaires. By matching these two datasets, we trained ten machine learning classifiers that predicted dog personality from activity data, achieving AUCs of 0.63–0.90, suggesting the value of tracking psychological signals of pets using wearable technologies.
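For readers unfamiliar with the metric: AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal self-contained sketch of the rank-based computation (not the authors' pipeline; the `auc` helper, labels, and scores here are purely illustrative and assume binarized trait labels with no tied scores):

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUC (Mann-Whitney U formulation): the probability that
    a randomly chosen positive example scores higher than a randomly
    chosen negative one. Assumes no tied scores, for brevity."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ranks
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# A classifier that ranks every positive above every negative scores 1.0;
# a random scorer hovers around 0.5.
perfect = auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
```

An AUC of 0.63 thus means the classifier ranks a positive-trait dog above a negative-trait dog 63% of the time, only modestly better than chance, while 0.90 indicates strong separation.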
What is Human-Centered about Human-Centered AI? A Map of the Research Landscape
Tara Capel (Queensland University of Technology, Brisbane, QLD, Australia); Margot Brereton (QUT, Brisbane, Australia)
The application of Artificial Intelligence (AI) across a wide range of domains comes with both high expectations of its benefits and dire predictions of misuse. While AI systems have largely been driven by a technology-centered design approach, the potential societal consequences of AI have mobilized both HCI and AI researchers towards researching human-centered artificial intelligence (HCAI). However, there remains considerable ambiguity about what it means to frame, design and evaluate HCAI. This paper presents a critical review of the large corpus of peer-reviewed literature emerging on HCAI in order to characterize what the community is defining as HCAI. Our review contributes an overview and map of HCAI research based on work that explicitly mentions the terms ‘human-centered artificial intelligence’ or ‘human-centered machine learning’ or their variations, and suggests future challenges and research directions. The map reveals the breadth of research happening in HCAI, established clusters and the emerging areas of Interaction with AI and Ethical AI. The paper contributes a new definition of HCAI, and calls for greater collaboration between AI and HCI research, and new HCAI constructs.
Lyric App Framework: A Web-based Framework for Developing Interactive Lyric-driven Musical Applications
Jun Kato (National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan)
Lyric videos have become a popular medium to convey lyrical content to listeners, but they present the same content whenever they are played and cannot adapt to listeners' preferences. Lyric apps, as we name them, are a new form of lyric-driven visual art that can render different lyrical content depending on user interaction and address the limitations of static media. To open up this novel design space for programmers and musicians, we present Lyric App Framework, a web-based framework for building interactive graphical applications that play musical pieces and show lyrics synchronized with playback. We designed the framework to provide a streamlined development experience for building production-ready lyric apps with creative coding libraries of choice. We held programming contests twice and collected 52 examples of lyric apps, enabling us to reveal eight representative categories, confirm the framework's effectiveness, and report lessons learned.
"What It Wants Me To Say": Bridging the Abstraction Gap Between End-User Programmers and Code-Generating Large Language Models
Michael Xieyang Liu (Microsoft Research, Cambridge, United Kingdom); Advait Sarkar (Microsoft Research, Cambridge, United Kingdom); Carina Negreanu (Microsoft Research, Cambridge, Cambridgeshire, United Kingdom); Benjamin Zorn (Microsoft Research, Redmond, Washington, United States); Jack Williams (Microsoft Research, Cambridge, United Kingdom); Neil Toronto (Microsoft Research, Cambridge, United Kingdom); Andrew D. Gordon (Microsoft Research, Redmond, Washington, United States)
Code-generating large language models map natural language to code. However, only a small portion of the infinite space of naturalistic utterances is effective at guiding code generation. For non-expert end-user programmers, learning this is the challenge of abstraction matching. We examine this challenge in the specific context of data analysis in spreadsheets, in a system that maps the user's natural language query to Python code using the Codex generator, executes the code, and shows the result. We propose grounded abstraction matching, which bridges the abstraction gap by translating the code back into a systematic and predictable naturalistic utterance. In a between-subjects, think-aloud study (n=24), we compare grounded abstraction matching to an ungrounded alternative based on previously established query framing principles. We find that the grounded approach improves end-users' understanding of the scope and capabilities of the code-generating model, and the kind of language needed to use it effectively.
XAIR: A Framework of Explainable AI in Augmented Reality
Xuhai Xu (Reality Labs Research, Redmond, Washington, United States); Anna Yu (Reality Labs Research, Redmond, Washington, United States); Tanya R. Jonker (Facebook Reality Labs: Research, Redmond, Washington, United States); Kashyap Todi (Reality Labs Research, Redmond, Washington, United States); Feiyu Lu (Reality Labs Research, Redmond, Washington, United States); Xun Qian (Reality Labs Research, Redmond, Washington, United States); João Marcelo Evangelista Belo (Reality Lab Research, Redmond, Washington, United States); Tianyi Wang (Reality Labs Research, Redmond, Washington, United States); Michelle Li (Reality Labs Research, Redmond, Washington, United States); Aran Mun (Reality Labs Research, Redmond, Washington, United States); Te-Yen Wu (Reality Labs Research, Redmond, Washington, United States); Junxiao Shen (Reality Labs Research, Redmond, Washington, United States); Ting Zhang (Meta Inc., Redmond, Washington, United States); Narine Kokhlikyan (Facebook, Menlo Park, California, United States); Fulton Wang (Reality Labs Research, Redmond, Washington, United States); Paul Sorenson (Reality Labs Research, Redmond, Washington, United States); Sophie Kim (Facebook Reality Labs, Redmond, Washington, United States); Hrvoje Benko (Meta, Redmond, Washington, United States)
Explainable AI (XAI) has established itself as an important component of AI-driven interactive systems. With Augmented Reality (AR) becoming more integrated into daily life, the role of XAI also becomes essential in AR because end-users will frequently interact with intelligent services. However, it is unclear how to design effective XAI experiences for AR. We propose XAIR, a design framework that addresses when, what, and how to provide explanations of AI output in AR. The framework is based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users' preferences for AR-based explanations, and three workshops with 12 experts collecting their insights about XAI design in AR. XAIR's utility and effectiveness were verified via a study with 10 designers and another study with 12 end-users. XAIR can provide guidelines for designers, inspiring them to identify new design opportunities and achieve effective XAI designs in AR.
Bias-Aware Systems: Exploring Indicators for the Occurrences of Cognitive Biases when Facing Different Opinions
Nattapat Boonprakong (University of Melbourne, Parkville, Victoria, Australia); Xiuge Chen (The University of Melbourne, Melbourne, Victoria, Australia); Catherine Davey (University of Melbourne, Parkville, Victoria, Australia); Benjamin Tag (University of Melbourne, Melbourne, Victoria, Australia); Tilman Dingler (University of Melbourne, Melbourne, Victoria, Australia)
Cognitive biases have been shown to play a critical role in creating echo chambers and spreading misinformation. They undermine our ability to evaluate information and can influence our behaviour without our awareness. To allow the study of occurrences and effects of biases on information consumption behaviour, we explore indicators for cognitive biases in physiological and interaction data. Therefore, we conducted two experiments investigating how people experience statements that are congruent or divergent from their own ideological stance. We collected interaction data, eye tracking data, hemodynamic responses, and electrodermal activity while participants were exposed to ideologically tainted statements. Our results indicate that people spend more time processing statements that are incongruent with their own opinion. We detected differences in blood oxygenation levels between congruent and divergent opinions, a first step towards building systems to detect and quantify cognitive biases.
Here and Now: Creating Improvisational Dance Movements with a Mixed Reality Mirror
Qiushi Zhou (University of Melbourne, Melbourne, Victoria, Australia); Louise Grebel (Université Paris-Saclay, Orsay, France); Andrew Irlitti (University of Melbourne, Melbourne, Australia); Julie Ann Minaai (The University of Melbourne, Southbank, Australia); Jorge Goncalves (University of Melbourne, Melbourne, Australia); Eduardo Velloso (University of Melbourne, Melbourne, Victoria, Australia)
This paper explores using mixed reality (MR) mirrors for supporting improvisational dance making. Motivated by the prevalence of mirrors in dance studios and inspired by Forsythe’s Improvisation Technologies, we conducted workshops with 13 dancers and choreographers to inform the design of future MR visualisation and annotation tools for dance. The workshops involved using a prototype MR mirror as a technology probe that reveals the spatial and temporal relationships between the reflected dancing body and its surroundings during improvisation; speed dating group interviews around future design ideas; follow-up surveys and extended interviews with a digital media dance artist and a dance educator. Our findings highlight how the MR mirror enriches dancers' temporal and spatial perception, creates multi-layered presence, and affords appropriation by dancers. We also discuss the unique place of MR mirrors in the theoretical context of dance and in the history of movement visualisation, and distil lessons for broader HCI research.
Smooth as - The Effects of Frame Rate Variation on Game Player Quality of Experience
Shengmei Liu (Worcester Polytechnic Institute, Worcester, Massachusetts, United States); Atsuo Kuwahara (Intel Corporation, Hillsboro, Oregon, United States); James J. Scovell (Intel Corporation, Hillsboro, Oregon, United States); Mark Claypool (WPI, Worcester, Massachusetts, United States)
For gamers, high frame rates are important for a smooth visual display and good quality of experience (QoE). However, high frame rates alone are not enough, as variations in frame display times can degrade QoE even when the average frame rate remains high. While the impact of steady frame rates on player QoE is fairly well-studied, the effects of frame rate variation are not. This paper presents a 33-person user study that evaluates the impact of frame rate variation on users playing three different computer games. Analysis of the results shows that average frame rate alone is a poor predictor of QoE and that frame rate variation has a significant impact on player QoE. While the standard deviation of frame times is promising as a general predictor of QoE, it may not be accurate for all individual games. However, the 95% frame rate floor (the bottom 5% of frame rates the player experiences) appears to be an effective predictor of QoE both overall and for the individual games tested.
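The abstract does not spell out how the 95% frame rate floor is computed; one plausible reading is the 5th-percentile per-frame rate derived from frame display times. A minimal sketch under that assumption (the function name and sample data are illustrative, not from the paper):

```python
import numpy as np

def frame_rate_floor_95(frame_times_ms):
    """5th-percentile per-frame rate: the rate at or below which the
    player's worst 5% of frames fall. One plausible reading of the
    '95% frame rate floor'; the paper may define it differently."""
    fps = 1000.0 / np.asarray(frame_times_ms, dtype=float)  # per-frame FPS
    return float(np.percentile(fps, 5))

# Mostly 60 FPS (16.7 ms frames) with occasional stutters to 10 FPS (100 ms):
times = [16.7] * 95 + [100.0] * 5
floor = frame_rate_floor_95(times)
```

A trace that stutters even briefly drags the floor well below the nominal frame rate, which is exactly the variation that average frame rate alone hides.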
Engaging Passers-by with Rhythm: Applying Feedforward Learning to a Xylophonic Media Architecture Facade
Alex Binh Vinh Duc Nguyen (KU Leuven, Leuven, Belgium); Jihae Han (KU Leuven, Leuven, Belgium); Maarten Houben (Eindhoven University of Technology, Eindhoven, Netherlands); Yssmin Bayoumi (KU Leuven, Leuven, Belgium); Andrew Vande Moere (KU Leuven, Leuven, Belgium)
Media architecture exploits interactive technology to encourage passers-by to engage with an architectural environment. Whereas most media architecture installations focus on visual stimulation, we developed a permanent media facade that rhythmically knocks xylophone blocks embedded beneath 11 window sills, according to the human actions constantly traced via an overhead camera. In an attempt to overcome its apparent limitations in engaging passers-by more enduringly and purposefully, our study investigates the impact of feedforward learning, a constructive interaction method that instructs passers-by about the results of their actions. Based on a comparative (n=25) and a one-month in-the-wild (n=1877) study, we propose how feedforward learning could empower passers-by to understand the interaction of more abstract types of media architecture, and how particular quantitative indicators capturing this learning could predict how enduringly and purposefully a passer-by might engage. We believe these contributions could inspire more creative integrations of non-visual modalities in future public interactive interventions.
How to Communicate Robot Motion Intent: A Scoping Review
Max Pascher (Westphalian University of Applied Sciences, Gelsenkirchen, NRW, Germany); Uwe Gruenefeld (University of Duisburg-Essen, Essen, Germany); Stefan Schneegass (University of Duisburg-Essen, Essen, Germany); Jens Gerken (Westphalian University of Applied Sciences, Gelsenkirchen, Germany)
Robots are becoming increasingly omnipresent in our daily lives, supporting us and carrying out autonomous tasks. In Human-Robot Interaction, human actors benefit from understanding the robot's motion intent to avoid task failures and foster collaboration. Finding effective ways to communicate this intent to users has recently received increased research interest. However, no common language has been established to systematize robot motion intent. This work presents a scoping review aimed at unifying existing knowledge. Based on our analysis, we present an intent communication model that depicts the relationship between robot and human through different intent dimensions (intent type, intent information, intent location). We discuss these different intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model, allowing us to identify key patterns and possible directions for future research.
Breaking Out of the Ivory Tower: A Large-scale Analysis of Patent Citations to HCI Research
Hancheng Cao (Stanford University, Stanford, California, United States); Yujie Lu (NLP, Santa Barbara, California, United States); Yuting Deng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Daniel McFarland (Stanford University, Stanford, California, United States); Michael S. Bernstein (Stanford University, Stanford, California, United States)
What is the impact of human-computer interaction research on industry? While it is impossible to track all research impact pathways, the growing literature on translational research impact measurement offers patent citations as one measure of how industry recognizes and draws on research in its inventions. In this paper, we perform a large-scale measurement study, primarily of 70,000 patent citations to premier HCI research venues, tracing how HCI research is cited in United States patents over the last 30 years. We observe that 20.1% of papers from these venues, including 60–80% of papers at UIST and 13% of papers in a broader dataset of SIGCHI-sponsored venues overall, are cited by patents, a far greater share than for premier venues in science overall (9.7%) and NLP (11%). However, the time lag between a patent and its paper citations is long (10.5 years) and getting longer, suggesting that HCI research and practice may not be efficiently connected.
QButterfly: Lightweight Survey Extension for Online User-Interaction Studies for Non-Tech-Savvy Researchers
Nico Ebert (ZHAW School of Management and Law, Winterthur, Zurich, Switzerland); Björn Scheppler (ZHAW School of Management and Law, Winterthur, Switzerland); Kurt Alexander Ackermann (ZHAW School of Management and Law, Winterthur, Zurich, Switzerland); Tim Geppert (Institut für Wirtschaftsinformatik, Winterthur, Switzerland)
We provide a user-friendly, flexible, and lightweight open-source HCI toolkit (github.com/QButterfly) that allows non-tech-savvy researchers to conduct online user interaction studies using the widespread Qualtrics and LimeSurvey platforms. These platforms already provide rich functionality (e.g., for experiments or usability tests) and therefore lend themselves to an extension to display stimulus web pages and record clickstreams. The toolkit consists of a survey template with embedded JavaScript, a JavaScript library embedded in the HTML web pages, and scripts to analyze the collected data. No special programming skills are required to set up a study or match survey data and user interaction data after data collection. We empirically validated the software in a laboratory and a field study. We conclude that this extension, even in its preliminary version, has the potential to make online user interaction studies (e.g., with crowdsourced participants) accessible to a broader range of researchers.
WebUI: A Dataset for Enhancing Visual UI Understanding with Web Semantics
Jason Wu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Siyan Wang (Wellesley College, Wellesley, Massachusetts, United States); Siman Shen (Grinnell College, Grinnell, Iowa, United States); Yi-Hao Peng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Jeffrey Nichols (Snooty Bird LLC, San Diego, California, United States); Jeffrey P. Bigham (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Modeling user interfaces (UIs) from visual information allows systems to make inferences about the functionality and semantics needed to support use cases in accessibility, app automation, and testing. Current datasets for training machine learning models are limited in size due to the costly and time-consuming process of manually collecting and annotating UIs. We crawled the web to construct WebUI, a large dataset of 400,000 rendered web pages associated with automatically extracted metadata. We analyze the composition of WebUI and show that while automatically extracted data is noisy, most examples meet basic criteria for visual UI modeling. We applied several strategies for incorporating semantics found in web pages to increase the performance of visual UI understanding models in the mobile domain, where less labeled data is available: (i) element detection, (ii) screen classification and (iii) screen similarity.
Co-Designing with Early Adolescents: Understanding Perceptions of and Design Considerations for Tech-Based Mediation Strategies that Promote Technology Disengagement
Ananta Chowdhury (University of Manitoba, Winnipeg, Manitoba, Canada)Andrea Bunt (University of Manitoba, Winnipeg, Manitoba, Canada)
Children’s excessive use of technology is a growing concern, and despite taking various measures, parents often find it difficult to limit their children’s device use. Limiting tech usage can be especially challenging with early adolescents as they start to develop a sense of autonomy. While numerous tech-based mediation solutions exist, in this paper, we aim to learn from early adolescents directly by having them contribute to co-design activities. Through a multi-session, group-based, online co-design study with 21 early adolescents (ages 11-14), we explore their perceptions towards tech overuse and what types of solutions they propose to help with disengagement. Findings from these co-design sessions contribute insights into how the participants conceptualized the problem of tech overuse, how they envisioned appropriate mediation strategies, and important design considerations. We also reflect on our study methods, which encouraged active participation from our participants and facilitated valuable contributions during the online co-design sessions.
ChallengeDetect: Investigating the Potential of Detecting In-Game Challenge Experience from Physiological Measures
Xiaolan Peng (Institute of Software, Chinese Academy of Sciences, Beijing, China)Xurong Xie (Institute of Software, Chinese Academy of Sciences, Beijing, China)Jin Huang (Chinese Academy of Sciences, Beijing, China)Chutian Jiang (Computational Media and Arts Thrust, Guangzhou, China)Haonian Wang (Department of Artificial Intelligence, Beijing, China)Alena Denisova (University of York, York, United Kingdom)Hui Chen (Institute of Software, Chinese Academy of Sciences, Beijing, China)Feng Tian (Institute of Software, Chinese Academy of Sciences, Beijing, China)Hongan Wang (Institute of Software, Chinese Academy of Sciences, Beijing, China)
Challenge is the core element of digital games. The wide spectrum of physical, cognitive, and emotional challenge experiences provided by modern digital games can be evaluated subjectively using a questionnaire, the CORGIS, which allows for a post hoc evaluation of the overall experience that occurred during game play. Measuring this experience dynamically and objectively, however, would allow for a more holistic view of the moment-to-moment experiences of players. This study, therefore, explored the potential of detecting perceived challenge from physiological signals. For this, we collected physiological responses from 32 players who engaged in three typical game scenarios. Using perceived challenge ratings from players and extracted physiological features, we applied multiple machine learning methods and metrics to detect challenge experiences. Results show that most methods achieved a detection accuracy of around 80%. We discuss in-game challenge perception, challenge-related physiological indicators and AI-supported challenge detection to inform future work on challenge evaluation.
Thermotion: Design and Fabrication of Thermofluidic Composites for Animation Effects on Object Surfaces
Tianyu Yu (Tsinghua University, Beijing, China)Weiye Xu (Tsinghua University, Beijing, China)Haiqing Xu (Tsinghua University, Beijing, China)Guanhong Liu (Tsinghua University, Beijing, China)Chang Liu (Tsinghua University, Beijing, China)Guanyun Wang (Zhejiang University, Hangzhou, China)Haipeng Mi (Tsinghua University, Beijing, China)
We introduce Thermotion, a novel method using thermofluidic composites to design and display thermochromic animation effects on object surfaces. With fluidic channels embedded under object surfaces, the composites use thermofluidic flows to dynamically control the surface temperature as an actuator for thermochromic paints, which enables researchers and designers for the first time to create animations not only on two- and three-dimensional surfaces but also on surfaces made of flexible everyday materials. We report the design space with six animation primitives and two modification effects, and we demonstrate the design and fabrication workflow with a customized software platform for design and simulation. A range of applications is shown leveraging the objects' dynamic displays both visually and thermally, including dynamic artifacts, teaching aids, and ambient displays. We envision an opportunity to extend thermofluidic composites to other heat-related practices for further dynamic and programmable interactions with temperature.
Fingerhints: Understanding Users' Perceptions of and Preferences for On-Finger Kinesthetic Notifications
Adrian-Vasile Catană (Ștefan cel Mare University of Suceava, Suceava, Romania)Radu-Daniel Vatavu (Ștefan cel Mare University of Suceava, Suceava, Romania)
We present "fingerhints," on-finger kinesthetic feedback represented by hyper-extension movements of the index finger, bypassing user agency, for notification delivery. To this end, we designed a custom-made finger-augmentation device that leverages mechanical force to deliver fingerhints as programmable hyper-extensions of the index finger. We evaluate fingerhints with 21 participants, and report good usability, low technology creepiness, and moderate to high social acceptability. In a second study with 11 new participants, we evaluate the wearable comfort of our fingerhints device against four commercial finger- and hand-augmentation devices. Finally, we present insights from the experience of one participant, who wore our device for eight hours during their daily life. We discuss the user experience of fingerhints in relation to our participants' personality traits, finger dexterity levels, and general attitudes toward notifications, and present implications for interactive systems leveraging on-finger kinesthetic feedback for on-body computing.
No Pie in the (Digital) Sky: Co-Imagining the Food Metaverse
Alexandra Covaci (University of Kent, Canterbury, Kent, United Kingdom)Khawla Alhasan (University of Kent, Canterbury, Kent, United Kingdom)Mayank Loonker (University of Kent, Canterbury, United Kingdom)Bernardine Farrell (University of Kent, Canterbury, United Kingdom)Luma Tabbaa (University of Kent, Canterbury, Kent, United Kingdom)Sophia Ppali (University of Kent, Canterbury, United Kingdom)Chee Siang Ang (University of Kent, Canterbury, KENT, United Kingdom)
Human behaviour and habits co-evolve with technology, and the metaverse is poised to become a key player in reshaping how we live our everyday life. Given the importance of food in our daily lives, we ask: how will our relationships with food be transformed by the metaverse, and what are the promises and pitfalls of this technology? To answer this, we propose a co-design study that reveals the important elements people value in their daily interactions with food. We then present a speculative catalogue of novel metaverse food experiences, and insights from discussing these ideas with food designers, anthropologists and metaverse experts. Our work aims to provide designers with inspirations for building a metaverse that: provides inclusive opportunities for the future of food; helps re-discover the forgotten or lost knowledge about food; facilitates the exploration, excitement and joy of eating; and reinvigorates the ways that food can soothe and heal.
AnisoTag: 3D Printed Tag on 2D Surface via Reflection Anisotropy
Zehua Ma (University of Science and Technology of China, Hefei, China)Hang Zhou (Simon Fraser University, Burnaby, British Columbia, Canada)Weiming Zhang (University of Science and Technology of China, Hefei, China)
In the past few years, the widespread use of 3D printing technology has enabled the growth of the market for 3D printed products. On Etsy, a website focused on handmade items, hundreds of individual entrepreneurs are selling their 3D printed products. Inspired by the positive effects of machine-readable tags, like barcodes, on daily product marketing, we propose AnisoTag, a novel tagging method to encode data on the 2D surface of 3D printed objects based on reflection anisotropy. AnisoTag has an unobtrusive appearance and much lower extraction computational complexity, contributing to a lightweight, low-cost tagging system for individual entrepreneurs. On AnisoTag, data are encoded by the proposed tool as reflective anisotropic microstructures, which reflect distinct illumination patterns when irradiated by a collimated laser. Based on this, we implement a real-time detection prototype with inexpensive hardware to determine the reflected illumination pattern and decode the data according to their mapping. We evaluate AnisoTag with various 3D printer brands, filaments, and printing parameters, demonstrating its superior usability, accessibility, and reliability for practical usage.
“I normally wouldn't talk with strangers”: Introducing a Socio-Spatial Interface for Fostering Togetherness Between Strangers
Ge Guo (Cornell University, Ithaca, New York, United States)Gilly Leshed (Cornell University, Ithaca, New York, United States)Keith Evan Green (Cornell University, Ithaca, New York, United States)
Interacting with strangers can be beneficial but also challenging. Fortunately, these challenges can lead to design opportunities. In this paper, we present the design and evaluation of a socio-spatial interface, SocialStools, that leverages the human propensity for embodied interaction to foster togetherness between strangers. SocialStools is an installation of three responsive stools on caster wheels that generate sound and imagery in the near environment as three strangers sit on them, move them, and rotate them relative to each other. In our study with 12 groups of three strangers, we found a sense of togetherness emerged through interaction, evidenced by different patterns of socio-spatial movements, verbal communication, non-verbal behavior, and interview responses. We present our findings, articulate reasons for the cultivation of togetherness, consider the unique social affordances of our spatial interface in shifting attention during interpersonal communication, and provide design implications. This research contributes insights toward designing cyber-physical interfaces that foster interaction and togetherness among strangers at a time when cultivating togetherness is especially critical.
Log-it: Supporting Programming with Interactive, Contextual, Structured, and Visual Logs
Peiling Jiang (University of California San Diego, San Diego, California, United States)Fuling Sun (University of California San Diego, San Diego, California, United States)Haijun Xia (University of California, San Diego, San Diego, California, United States)
Logging is a widely used technique for inspecting and understanding programs. However, the presentation of logs still often takes its ancient form of a linear stream of text that resides in a terminal, console, or log file. Despite its simplicity, interpreting log output is often challenging due to the large number of textual logs that lack structure and context. We conducted content analysis and expert interviews to understand the practices and challenges inherent in logging. These activities demonstrated that the current representation of logs does not provide the rich structures programmers need to interpret them or the program's behavior. We present Log-it, a logging interface that enables programmers to interactively structure and visualize logs in situ. A user study with novices and experts showed that Log-it's syntax and interface have a minimal learning curve, and the interactive representations and organizations of logs help programmers easily locate, synthesize, and understand logs.
Personalised Yet Impersonal: Listeners' Experiences Of Algorithmic Curation On Music Streaming Services
Sophie Freeman (The University of Melbourne, Melbourne, Australia)Martin Gibbs (The University of Melbourne, Melbourne, Victoria, Australia)Bjorn Nansen (University of Melbourne, Melbourne, Australia)
The consumption of music is increasingly reliant on the personalisation, recommendation, and automated curation features of music streaming services. Using algorithm experience (AX) as a lens, we investigated the user experience of the algorithmic recommendation and automated curation features of several popular music streaming services. We conducted interviews and participant-observation with 15 daily users of music streaming services, followed by a design workshop. We found that despite the utility of increasingly algorithmic personalisation, listeners experienced these algorithmic and recommendation features as impersonal in determining their background listening, music discovery, and playlist curation. While listener desire for more control over recommendation settings is not new, we offer a number of novel insights about music listening to nuance this understanding, particularly through the notion of vibe.
Speech-Augmented Cone-of-Vision for Exploratory Data Analysis
Riccardo Bovo (Imperial College London, London, United Kingdom)Daniele Giunchi (University College London, London, United Kingdom)Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom)Joshua Newn (Lancaster University, Lancaster, Lancashire, United Kingdom)Hans Gellersen (Aarhus University, Aarhus, Denmark)Enrico Costanza (UCL Interaction Centre, London, United Kingdom)Thomas Heinis (Imperial College, London, United Kingdom)
Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and cursor visualizations based on eye-tracking, but these methods have limitations. Verbal communication is often utilized as a complementary strategy to overcome such disadvantages. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than the approximation provided by head direction. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.
How Instructional Data Physicalization Fosters Reflection in Personal Informatics
Marit Bentvelzen (Utrecht University, Utrecht, Netherlands)Julia Dominiak (Lodz University of Technology, Łódź, Poland)Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland)Frederique Henraat (Utrecht University, Utrecht, Netherlands)Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden)
The ever-increasing number of devices quantifying our lives offers a perspective of high awareness of one's wellbeing, yet it remains a challenge for personal informatics (PI) to effectively support data-based reflection. Effective reflection is recognised as a key factor for PI technologies to foster wellbeing. Here, we investigate whether building tangible representations of health data can offer engaging and reflective experiences. We conducted a between-subjects study where n=60 participants explored their immediate blood pressure data in relation to medical norms. They either used a standard mobile app, built a data representation from LEGO bricks based on instructions, or completed a free-form brick build. We found that building with instructions fostered more comparison and using bricks fostered focused attention. The free-form condition required extra time to complete, and lacked usability. Our work shows that designing instructional physicalisation experiences for PI is a means of improving engagement and understanding of personal data.
Exploring Memory-Oriented Interactions with Digital Photos In and Across Time: A Field Study of Chronoscope
Amy Yo Sue Chen (Simon Fraser University, Surrey, British Columbia, Canada)William Odom (Simon Fraser University, Surrey, British Columbia, Canada)Carman Neustaedter (Simon Fraser University, Surrey, British Columbia, Canada)Ce Zhong (Simon Fraser University, Surrey, British Columbia, Canada)Henry Lin (Simon Fraser University, Surrey, British Columbia, Canada)
We describe a field study of Chronoscope, a tangible photo viewer that lets people revisit and explore their digital photos with the support of temporal metadata. Chronoscope offers different temporal modalities for organizing one’s personal digital photo archive, and for exploring possible connections in and across time, and among photos and memories. We deployed four Chronoscopes in four households for three months to understand participants’ experiences over time. Our goals are to investigate the reflective potential of temporal modalities as an alternative design approach for supporting memory-oriented photo exploration, and to empirically explore conceptual propositions related to slow technology. Findings revealed that Chronoscope catalyzed a range of reflective experiences on participants’ respective life histories and life stories. It opened up alternative ways of considering time and the potential longevity of personal photo archives. We conclude with implications that present opportunities for future HCI research and practice.
TactIcons: Designing 3D Printed Map Icons for People who are Blind or have Low Vision
Leona M. Holloway (Monash University, Melbourne, VIC, Australia)Matthew Butler (Monash University, Melbourne, Australia)Kim Marriott (Monash University, Melbourne, Australia)
Visual icons provide immediate recognition of features on print maps but do not translate well for touch reading by people who are blind or have low vision due to the low fidelity of tactile perception. We explored 3D printed icons as an equivalent to visual icons for tactile maps addressing these problems. We designed over 200 tactile icons (TactIcons) for street and park maps. These were touch tested by blind and sighted people, resulting in a corpus of 33 icons that can be recognised instantly and a further 34 icons that are easily learned. Importantly, this work has informed the creation of detailed guidelines for the design of TactIcons and a practical methodology for touch testing new TactIcons. It is hoped that this work will contribute to the creation of more inclusive, user-friendly tactile maps for people who are blind or have low vision.
Drifting Off in Paradise: Why People Sleep in Virtual Reality
Michael Yin (University of British Columbia, Vancouver, British Columbia, Canada)Robert Xiao (University of British Columbia, Vancouver, British Columbia, Canada)
Sleep is important for humans, and past research has considered methods of improving sleep through technologies such as virtual reality (VR). However, there has been limited research on how such VR technology may affect the experiential and practical aspects of sleep, especially outside of a clinical lab setting. We consider this research gap through the lens of individuals who voluntarily engage in the practice of sleeping in VR. Semi-structured interviews with 14 participants who have slept in VR reveal insights regarding the motivations, actions, and experiential factors that uniquely define this practice. We find that participant motives can be largely categorized through either the experiential or social affordances of VR. We tie these motives into findings regarding the unique customs of sleeping in VR, involving set-up both within the physical and virtual space. Finally, we identify current and future challenges for sleeping in VR, and propose prospective design directions.
Are You Killing Time? Predicting Smartphone Users’ Time-killing Moments via Fusion of Smartphone Sensor Data and Screenshots
Yu-Chun Chen (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Yu-Jen Lee (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Kuei-Chun Kao (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Jie Tsai (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)En-Chi Liang (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Wei-Chen Chiu (National Chiao Tung University, Hsinchu City, Taiwan)Faye Shih (Bryn Mawr College, Bryn Mawr, Pennsylvania, United States)Yung-Ju Chang (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)
Time-killing on smartphones has become a pervasive activity, and time-killing moments could be opportune for delivering content to users. This research is believed to be the first attempt at time-killing detection that leverages the fusion of phone-sensor and screenshot data. We collected nearly one million user-annotated screenshots from 36 Android users. Using this dataset, we built a deep-learning fusion model, which achieved a precision of 0.83 and an AUROC of 0.72. We further employed a two-stage clustering approach to separate users into four groups according to their phone-usage behavior patterns, and then built a fusion model for each group. The performance of the four models, though diverse, yielded a better average precision of 0.87 and AUROC of 0.76, superior to those of the general/unified model shared among all users. We investigate and discuss the features of the four time-killing behavior clusters that explain why the models’ performance differs.
DAPIE: Interactive Step-by-Step Explanatory Dialogues to Answer Children’s Why and How Questions
Yoonjoo Lee (KAIST, Daejeon, Korea, Republic of)Tae Soo Kim (KAIST, Daejeon, Korea, Republic of)Sungdong Kim (NAVER AI Lab, Seongnam, Korea, Republic of)Yohan Yun (KAIST, Suwon, Gyeonggi, Korea, Republic of)Juho Kim (KAIST, Daejeon, Korea, Republic of)
Children acquire an understanding of the world by asking "why'' and "how'' questions. Conversational agents (CAs) like smart speakers or voice assistants can be promising respondents to children's questions as they are more readily available than parents or teachers. However, CAs' answers to "why'' and "how'' questions are not designed for children, as they can be difficult to understand and provide little interactivity to engage the child. In this work, we propose design guidelines for creating interactive dialogues that promote children's engagement and help them understand explanations. Applying these guidelines, we propose DAPIE, a system that answers children's questions through interactive dialogue by employing an AI-based pipeline that automatically transforms existing long-form answers from online sources into such dialogues. A user study (N=16) showed that, with DAPIE, children performed better in an immediate understanding assessment while also reporting higher enjoyment than when explanations were presented sentence-by-sentence.
This Watchface Fits with my Tattoos: Investigating Customisation Needs and Preferences in Personal Tracking
Ruben Gouveia (University of Twente, Enschede, Netherlands)Daniel A. Epstein (University of California, Irvine, Irvine, California, United States)
People engage in self-tracking with diverse data collection and visualisation needs and preferences. Customisable self-tracking tools offer the potential to support individualized preferences by letting people make changes to the aesthetics and functionality of tracker displays. In this paper, we use the customisation options offered by the displays of commercial fitness smartwatches as a lens to investigate when, why and how 386 self-trackers engage in customisations in their daily lives. We find that people largely customise their trackers' displays either frequently (multiple times a day) or not at all, with frequent customisations reflecting situational data, aesthetic, and personal meaning needs. We discuss implications for the design of tracking tools aiming to support customisation and discuss the utility of customisations towards goal scaffolding and maintaining interest in tracking.
GAM Coach: Towards Interactive and User-centered Algorithmic Recourse
Zijie J. Wang (Georgia Tech, Atlanta, Georgia, United States)Jennifer Wortman Vaughan (Microsoft Research, New York, New York, United States)Rich Caruana (Microsoft Research, Redmond, Washington, United States)Duen Horng Chau (Georgia Tech, Atlanta, Georgia, United States)
Machine learning (ML) recourse techniques are increasingly used in high-stakes domains, providing end users with actions to alter ML predictions, but they assume ML developers understand what input variables can be changed. However, a recourse plan's actionability is subjective and unlikely to match developers' expectations completely. We present GAM Coach, a novel open-source system that adapts integer linear programming to generate customizable counterfactual explanations for Generalized Additive Models (GAMs), and leverages interactive visualizations to enable end users to iteratively generate recourse plans meeting their needs. A quantitative user study with 41 participants shows our tool is usable and useful, and users prefer personalized recourse plans over generic plans. Through a log analysis, we explore how users discover satisfactory recourse plans, and provide empirical evidence that transparency can lead to more opportunities for everyday users to discover counterintuitive patterns in ML models. GAM Coach is available at: https://poloclub.github.io/gam-coach/.
TicTacToes: Assessing Toe Movements as an Input Modality
Florian Müller (LMU Munich, Munich, Germany)Daniel Schmitt (TU Darmstadt, Darmstadt, Germany)Andrii Matviienko (Technical University of Darmstadt, Darmstadt, Germany)Dominik Schön (TU Darmstadt, Darmstadt, Germany)Sebastian Günther (Technical University of Darmstadt, Darmstadt, Germany)Thomas Kosch (HU Berlin, Berlin, Germany)Martin Schmitz (Saarland University, Saarbrücken, Germany)
From carrying grocery bags to holding onto handles on the bus, there are a variety of situations where one or both hands are busy, hindering the vision of ubiquitous interaction with technology. Voice commands, as a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research only considered the movement of the foot as a whole, in this work, we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces.
Empathic Accuracy and Mental Effort during Remote Assessments of Emotions
Stephan Huber (Julius-Maximilians-Universität, Würzburg, Germany)Natalie Rathß (Julius-Maximilians-Universität, Würzburg, Germany)
Observing users in remote settings is unfavorable because it adds filters that alter the information underlying judgement. Still, the COVID-19 pandemic led to an unprecedented popularity of remote user experience tests. In this work, we revisited the question of which information is most important for evaluators to assess users’ emotions successfully and efficiently. In an online study, we asked N=55 participants to assess users’ emotions from short videos of 30 interaction situations. As the independent variable, we manipulated, within subjects, the combination of three information channels: video of the user, video of the interactive technology, and audio. Our findings indicate that empathic accuracy is highest and mental effort is lowest when all stimuli are present. Surprisingly, empathic accuracy was lowest and mental effort highest when only video of the user was available. We discuss these findings in light of emotion literature focusing on persons’ facial expressions and derive practical implications for remote observations.
Imagine That! Imaginative Suggestibility Affects Presence in Virtual Reality
Crescent Jicol (University of Bath, Bath, United Kingdom)Christopher Clarke (University of Bath, Bath, United Kingdom)Emilia Tor (University of Bath, Bath, United Kingdom)Hiu Lam Yip (University of Bath, Bath, United Kingdom)Jinha Yoon (University of Bath, Bath, Somerset, United Kingdom)Chris Bevan (University of Bristol, Bristol, United Kingdom)Hugh Bowden (King's College London, London, United Kingdom)Elisa Brann (King's College London, London, United Kingdom)Kirsten Cater (University of Bristol, Bristol, United Kingdom)Richard Cole (University of Bristol, Bristol, United Kingdom)Quinton Deeley (King's College London, London, United Kingdom)Esther Eidinow (University of Bristol, Bristol, United Kingdom)Eamonn O'Neill (University of Bath, Bath, United Kingdom)Christof Lutteroth (University of Bath, Bath, United Kingdom)Michael J. Proulx (University of Bath, Bath, United Kingdom)
Personality characteristics can affect how much presence an individual experiences in virtual reality, and researchers have explored how it may be possible to prime users to increase their sense of presence. A personality characteristic that has yet to be explored in the VR literature is imaginative suggestibility, the ability of an individual to successfully experience an imaginary scenario as if it were real. In this paper, we explore how suggestibility and priming affect presence when consulting an ancient oracle in VR as part of an educational experience -- a common VR application. We show for the first time how imaginative suggestibility is a major factor which affects presence and emotions experienced in VR, while priming cues have no effect on participants' (n=128) user experience, contrasting results from prior work. We consider the impacts of these findings for VR design and provide guidelines based on our results.
The Tactile Dimension: A Method for Physicalizing Touch Behaviors
Laura J. Perovich (Northeastern University, Boston, Massachusetts, United States)Bernice Rogowitz (Visual Perspectives Research, Westchester, New York, United States)Victoria Crabb (Northeastern University, Boston, Massachusetts, United States)Jack Vogelsang (Northeastern University, Boston, Massachusetts, United States)Sara Hartleben (Northeastern University, Boston, Massachusetts, United States)Dietmar Offenhuber (Northeastern University, Boston, Massachusetts, United States)
Traces of touch provide valuable insight into how we interact with the physical world. Measuring touch behavior, however, is expensive and imprecise. Utilizing a fluorescent UV tracer powder, we developed a low-cost analog method to capture persistent, high-contrast touch records on arbitrary objects. We describe our process for selecting a tracer, methods for capturing, enhancing, and aggregating traces, and approaches to examining qualitative aspects of the user experience. Three user studies demonstrate key features of this method. First, we show that it provides clear and durable traces on objects representative of scientific visualization, physicalization, and product design. Second, we demonstrate how this method could be used to study touch perception, by measuring how task and narrative framing elicit different touch behaviors on the same object. Third, we demonstrate how this method can be used to evaluate data physicalizations by observing how participants touch two different physicalizations of COVID-19 time-series data.
LYDSPOR: An Urban Sound Experience Weaving Together Past and Present Through Vibrating Bodies
Karin Ryding (IT University, Copenhagen, Denmark)Vasiliki Tsaknaki (IT University of Copenhagen, Copenhagen, Denmark)Stina Marie Hasse Jørgensen (IT University of Copenhagen, Copenhagen, Denmark)Jonas Fritsch (Digital Design, Copenhagen, Denmark)
In this paper we present LYDSPOR: a site-specific sound experience, created in Elsinore, Denmark, consisting of two physical installations and an app-based soundwalk, which together allow people to feel and sense in their bodies narrative fragments of the city's history. LYDSPOR is the result of a 1.5-year-long research project that aimed to adopt soma design methods by drawing on affective interaction design and on an understanding of bodies as always multiple, relational, and never only human. Through an analysis of the design process and its outcome, the paper contributes an in-depth understanding of how to combine somatic and affective design approaches when creating site-specific sonic augmentations for historical dissemination in public space.
It is Okay to be Distracted: How Real-time Transcriptions Facilitate Online Meeting with Distraction
Seoyun Son (KAIST, Daejeon, Korea, Republic of)Junyoung Choi (KAIST, Daejeon, Korea, Republic of)Sunjae Lee (KAIST, Daejeon, Korea, Republic of)Jean Y. Song (DGIST, Daegu, Korea, Republic of)Insik Shin (KAIST, Daejeon, Korea, Republic of)
Online meetings are indispensable in collaborative remote work environments, but they are vulnerable to distractions due to their distributed and location-agnostic nature. While distraction often leads to a decrease in online meeting quality due to loss of engagement and context, natural multitasking has positive tradeoff effects, such as increased productivity within a given time unit. In this study, we investigate the impact of real-time transcriptions (i.e., full-transcripts, summaries, and keywords) as a solution to help facilitate online meetings during distracting moments while still preserving multitasking behaviors. Through two rounds of controlled user studies, we qualitatively and quantitatively show that people can better catch up with the meeting flow and feel less interfered with when using real-time transcriptions. The benefits of real-time transcriptions were more pronounced after distracting activities. Furthermore, we reveal additional impacts of real-time transcriptions (e.g., supporting recalling contents) and suggest design implications for future online meeting platforms where these could be adaptively provided to users with different purposes.
Studying the Effect of AI Code Generators on Supporting Learners in Introductory Programming
Majeed Kazemitabaar (University of Toronto, Toronto, Ontario, Canada)Justin Chow (University of Toronto, Toronto, Ontario, Canada)Carl Ka To Ma (University of Toronto, Toronto, Ontario, Canada)Barbara J. Ericson (University of Michigan, Ann Arbor, Michigan, United States)David Weintrop (University of Maryland, College Park, Maryland, United States)Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
AI code generators like OpenAI Codex have the potential to assist novice programmers by generating code from natural language descriptions; however, over-reliance might negatively impact learning and retention. To explore the implications that AI code generators have for introductory programming, we conducted a controlled experiment with 69 novices (ages 10-17). Learners worked on 45 Python code-authoring tasks, each followed by a code-modification task; half of the learners had access to Codex during the code-authoring tasks. Our results show that using Codex significantly increased code-authoring performance (1.15x increased completion rate and 1.8x higher scores) while not decreasing performance on manual code-modification tasks. Additionally, learners with access to Codex during the training phase performed slightly better on the evaluation post-tests conducted one week later, although this difference did not reach statistical significance. Notably, learners with higher Scratch pre-test scores performed significantly better on retention post-tests if they had prior access to Codex.
VRGit: A Version Control System for Collaborative Content Creation in Virtual Reality
Lei Zhang (University of Michigan, Ann Arbor, Michigan, United States)Ashutosh Agrawal (University of Michigan, Ann Arbor, Michigan, United States)Steve Oney (University of Michigan, Ann Arbor, Michigan, United States)Anhong Guo (University of Michigan, Ann Arbor, Michigan, United States)
Immersive authoring tools allow users to intuitively create and manipulate 3D scenes while immersed in Virtual Reality (VR). Collaboratively designing these scenes is a creative process that involves numerous edits, explorations of design alternatives, and frequent communication with collaborators. Version Control Systems (VCSs) help users achieve this by keeping track of the version history and creating a shared hub for communication. However, most VCSs are unsuitable for managing the version history of VR content because their underlying line differencing mechanism is designed for text and lacks the semantic information of 3D content; and the widely adopted commit model is designed for asynchronous collaboration rather than real-time awareness and communication in VR. We introduce VRGit, a new collaborative VCS that visualizes version history as a directed graph composed of 3D miniatures, and enables users to easily navigate versions, create branches, as well as preview and reuse versions directly in VR. Beyond individual uses, VRGit also facilitates synchronous collaboration in VR by providing awareness of users’ activities and version history through portals and shared history visualizations. In a lab study with 14 participants (seven groups), we demonstrate that VRGit enables users to easily manage version history both individually and collaboratively in VR.
Tailoring a Persuasive Game to Promote Secure Smartphone Behaviour
Anirudh Ganesh (Dalhousie University, Halifax, Nova Scotia, Canada)Chinenye Ndulue (Dalhousie University, Halifax, Nova Scotia, Canada)Rita Orji (Dalhousie University, Halifax, Nova Scotia, Canada)
Smartphones have become an integral part of everyday life. Due to their ubiquitous nature and multiple functionalities, the data these devices handle are sensitive in nature. Despite the measures companies take to protect users' data, research has shown that people do not take the necessary actions to stay safe from security and privacy threats. Persuasive games have been applied across various domains to motivate people towards positive behaviour change. Yet even though persuasive games can be effective, research has shown that a one-size-fits-all approach to their design may be less effective than tailored versions. This paper presents the design and evaluation of a persuasive game, tailored to the user's motivational orientation using Regulatory Focus Theory, that improves user awareness of smartphone security and privacy. From the results of our mixed-methods in-the-wild study of 102 people, followed by one-on-one interviews with 25 people, it is evident that the tailored version of the persuasive game outperformed the non-tailored version in improving users' secure smartphone behaviour. We contribute to the broader HCI community by offering design suggestions and highlighting the benefits of tailoring persuasive games.
Reality Rifts: Wonder-ful Interfaces by Disrupting Perceptual Causality
Lung-Pan Cheng (National Taiwan University, Taipei, Taiwan)Yi Chen (National Taiwan University, Taipei, Taiwan)Yi-Hao Peng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Christian Holz (ETH Zürich, Zurich, Switzerland)
Reality Rifts are interfaces between physical and virtual reality, where incoherent observations of physical behavior lead users to imagine comprehensive and plausible end-to-end dynamics. Reality Rifts emerge in interactive physical systems that lack one or more components central to their operation, yet where the physical end-to-end interaction persists with plausible outcomes. Even in the presence of a Reality Rift, users can still interact with a system—much like they would with its unaltered and complete counterpart—leading them to implicitly infer the existence and imagine the behavior of the missing components from observable phenomena and outcomes. Dynamic systems with Reality Rifts thus trigger doubt, curiosity, and rumination: a sense of wonder rooted in users' innate curiosity. In this paper, we explore how interactive systems can elicit and guide the user's imagination by integrating Reality Rifts. We outline the design process for opening a Reality Rift in interactive physical systems, describe the resulting design space, and explore it through six characteristic prototypes. To understand to what extent and with which qualities these prototypes indeed induce a sense of wonder during interaction, we evaluated them in a field deployment with 50 participants. We discuss participants' behavior and derive factors for the implementation of future wonder-ful experiences.