List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

18
Birds of a Feather Video-Flock Together: Design and Evaluation of an Agency-Based Parrot-to-Parrot Video-Calling System for Interspecies Ethical Enrichment
Rebecca Kleinberger (Northeastern University, Boston, Massachusetts, United States); Jennifer Cunha (Parrot Kindergarten, Jupiter, Florida, United States); Megha M. Vemuri (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States); Ilyena Hirskyj-Douglas (The University of Glasgow, Glasgow, United Kingdom)
Over 20 million parrots are kept as pets in the US, often lacking appropriate stimuli to meet their high social, cognitive, and emotional needs. After reviewing bird perception and agency literature, we developed an approach to allow parrots to engage in video-calling other parrots. Following a pilot experiment and expert survey, we ran a three-month study with 18 pet birds to evaluate the potential value and usability of a parrot-parrot video-calling system. We assessed the system in terms of perception, agency, engagement, and overall perceived benefits. With 147 bird-triggered calls, our results show that 1) every bird used the system, 2) most birds exhibited high motivation and intentionality, and 3) all caretakers reported perceived benefits, some arguably life-transformative, such as learning to forage or even to fly by watching others. We report on individual insights and propose considerations regarding ethics and the potential of parrot video-calling for enrichment.
12
“What if everyone is able to program?” – Exploring the Role of Software Development in Science Fiction
Kevin Krings (University of Siegen, Siegen, Germany); Nino S. Bohn (University of Siegen, Siegen, Germany); Nora Anna Luise Hille (University of Siegen, Siegen, Germany); Thomas Ludwig (University of Siegen, Siegen, Germany)
For decades, research around emerging technologies has been inspired by science fiction and vice versa. While so far almost only the technologies themselves have been considered, we explore the underlying software development and programming approaches. We therefore conduct a detailed media content analysis of twenty-seven movies that examines the role of software development in science fiction by identifying and investigating new approaches to programming and how software development is conceptualized and portrayed within science fiction scenes. With the additional analysis of eighteen design fiction stories exploring the scenario “What if everyone is able to program?”, we envision potential impacts of the democratization of software development on business and society. Our study opens new discussions and perspectives by investigating the current vision of the future of programming and uncovering new approaches to software development, which can serve as a starting point for further research in the HCI community.
12
Horse as Teacher: How human-horse interaction informs human-robot interaction
Eakta Jain (University of Florida, Gainesville, Florida, United States); Christina Gardner-McCune (University of Florida, Gainesville, Florida, United States)
Robots are entering our lives and workplaces as companions and teammates. Though much research has been done on how to interact with robots, teach robots and improve task performance, an open frontier for HCI/HRI research is how to establish a working relationship with a robot in the first place. Studies that explore the early stages of human-robot interaction are an emerging area of research. Simultaneously, there is resurging interest in how human-animal interaction could inform human-robot interaction. We present a first examination of early stage human-horse interaction through the lens of human-robot interaction, thus connecting these two areas. Following Strauss’ approach, we conduct a thematic analysis of data from three sources gathered over a year of field work: observations, interviews and journal entries. We contribute design guidelines based on our analyses and findings.
11
CiteSee: Augmenting Citations in Scientific Papers with Persistent and Personalized Historical Context
Joseph Chee Chang (Allen Institute for AI, Seattle, Washington, United States); Amy X. Zhang (University of Washington, Seattle, Washington, United States); Jonathan Bragg (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Andrew Head (University of Pennsylvania, Philadelphia, Pennsylvania, United States); Kyle Lo (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Doug Downey (Allen Institute for Artificial Intelligence, Seattle, Washington, United States); Daniel S. Weld (Allen Institute for Artificial Intelligence, Seattle, Washington, United States)
When reading a scholarly article, inline citations help researchers contextualize the current article and discover relevant prior work. However, it can be challenging to prioritize and make sense of the hundreds of citations encountered during literature reviews. This paper introduces CiteSee, a paper reading tool that leverages a user's publishing, reading, and saving activities to provide personalized visual augmentations and context around citations. First, CiteSee connects the current paper to familiar contexts by surfacing known citations a user had cited or opened. Second, CiteSee helps users prioritize their exploration by highlighting relevant but unknown citations based on saving and reading history. We conducted a lab study that suggests CiteSee is significantly more effective for paper discovery than three baselines. A field deployment study shows CiteSee helps participants keep track of their explorations and leads to better situational awareness and increased paper discovery via inline citation when conducting real-world literature reviews.
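As a rough illustration of the two augmentations described above, the Python sketch below tags a paper's inline citations against a hypothetical record of the reader's history. Relevance is reduced here to membership in the reader's saved papers; the actual system infers relevance from reading and saving activity, and all names and data are made up for illustration.

```python
def annotate_citations(citations, history):
    """Tag inline citations in the spirit of CiteSee's two augmentations:
    surface familiar papers (previously cited or opened) and highlight
    relevant-but-unknown ones. `history` is a hypothetical record of
    paper ids; the real system derives relevance from activity,
    not simple set membership."""
    tagged = []
    for paper_id in citations:
        if paper_id in history["cited"] or paper_id in history["opened"]:
            tag = "familiar"     # connect the paper to known contexts
        elif paper_id in history["saved"]:
            tag = "highlight"    # relevant but not yet read
        else:
            tag = "plain"
        tagged.append((paper_id, tag))
    return tagged

history = {"cited": {"p1"}, "opened": {"p4"}, "saved": {"p2"}}
print(annotate_citations(["p1", "p2", "p3", "p4"], history))
# [('p1', 'familiar'), ('p2', 'highlight'), ('p3', 'plain'), ('p4', 'familiar')]
```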
9
Surface I/O: Creating Devices with Functional Surface Geometry for Haptics and User Input
Yuran Ding (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Craig Shultz (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Surface I/O is a novel interface approach that functionalizes the exterior surface of devices to provide haptic and touch sensing without dedicated mechanical components. Achieving this requires a unique combination of surface features spanning the macro-scale (5cm~1mm), meso-scale (1mm~200μm), and micro-scale (<200μm). This approach simplifies interface creation, allowing designers to iterate on form geometry, haptic feeling, and sensing functionality without the limitations of mechanical mechanisms. We believe this can contribute to the concept of "invisible ubiquitous interactivity at scale", where the simplicity and easy implementation of the technique allows it to blend with objects around us. While we prototyped our designs using 3D printers and laser cutters, our technique is applicable to mass production methods, including injection molding and stamping, enabling passive goods with new levels of interactivity.
9
Short-Form Videos Degrade Our Capacity to Retain Intentions: Effect of Context Switching On Prospective Memory
Francesco Chiossi (LMU Munich, Munich, Germany); Luke Haliburton (LMU Munich, Munich, Germany); Changkun Ou (LMU Munich, Munich, Germany); Andreas Martin Butz (LMU Munich, Munich, Germany); Albrecht Schmidt (LMU Munich, Munich, Germany)
Social media platforms use short, highly engaging videos to catch users' attention. While the short-form video feeds popularized by TikTok are rapidly spreading to other platforms, we do not yet understand their impact on cognitive functions. We conducted a between-subjects experiment (N=60) investigating the impact of engaging with TikTok, Twitter, and YouTube while performing a Prospective Memory task (i.e., executing a previously planned action). The study required participants to remember intentions over interruptions. We found that the TikTok condition significantly degraded the users’ performance in this task. As none of the other conditions (Twitter, YouTube, no activity) had a similar effect, our results indicate that the combination of short videos and rapid context-switching impairs intention recall and execution. We contribute a quantified understanding of the effect of social media feed format on Prospective Memory and outline consequences for media technology designers to not harm the users’ memory and wellbeing.
8
What is Human-Centered about Human-Centered AI? A Map of the Research Landscape
Tara Capel (Queensland University of Technology, Brisbane, QLD, Australia); Margot Brereton (QUT, Brisbane, Australia)
The application of Artificial Intelligence (AI) across a wide range of domains comes with both high expectations of its benefits and dire predictions of misuse. While AI systems have largely been driven by a technology-centered design approach, the potential societal consequences of AI have mobilized both HCI and AI researchers towards researching human-centered artificial intelligence (HCAI). However, there remains considerable ambiguity about what it means to frame, design and evaluate HCAI. This paper presents a critical review of the large corpus of peer-reviewed literature emerging on HCAI in order to characterize what the community is defining as HCAI. Our review contributes an overview and map of HCAI research based on work that explicitly mentions the terms ‘human-centered artificial intelligence’ or ‘human-centered machine learning’ or their variations, and suggests future challenges and research directions. The map reveals the breadth of research happening in HCAI, established clusters and the emerging areas of Interaction with AI and Ethical AI. The paper contributes a new definition of HCAI, and calls for greater collaboration between AI and HCI research, and new HCAI constructs.
8
"What It Wants Me To Say": Bridging the Abstraction Gap Between End-User Programmers and Code-Generating Large Language Models
Michael Xieyang Liu (Microsoft Research, Cambridge, United Kingdom); Advait Sarkar (Microsoft Research, Cambridge, United Kingdom); Carina Negreanu (Microsoft Research, Cambridge, Cambridgeshire, United Kingdom); Benjamin Zorn (Microsoft Research, Redmond, Washington, United States); Jack Williams (Microsoft Research, Cambridge, United Kingdom); Neil Toronto (Microsoft Research, Cambridge, United Kingdom); Andrew D Gordon (Microsoft Research, Redmond, Washington, United States)
Code-generating large language models map natural language to code. However, only a small portion of the infinite space of naturalistic utterances is effective at guiding code generation. For non-expert end-user programmers, learning this is the challenge of abstraction matching. We examine this challenge in the specific context of data analysis in spreadsheets, in a system that maps the user's natural language query to Python code using the Codex generator, executes the code, and shows the result. We propose grounded abstraction matching, which bridges the abstraction gap by translating the code back into a systematic and predictable naturalistic utterance. In a between-subjects, think-aloud study (n=24), we compare grounded abstraction matching to an ungrounded alternative based on previously established query framing principles. We find that the grounded approach improves end-users' understanding of the scope and capabilities of the code-generating model, and the kind of language needed to use it effectively.
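To make "grounded abstraction matching" concrete, the toy Python sketch below performs the round trip the abstract describes: a free-form query is mapped to code, and the code is translated back into one systematic, predictable utterance shown to the user. The template grammar and function names here are hypothetical stand-ins; the actual system uses the Codex model rather than templates.

```python
import re

def nl_to_code(query):
    """Hypothetical mapper from a narrow class of spreadsheet-analysis
    queries to pandas code (the paper uses Codex instead)."""
    m = re.match(r"(average|sum) of (\w+) by (\w+)", query.lower())
    if not m:
        raise ValueError("query not understood")
    agg = {"average": "mean", "sum": "sum"}[m.group(1)]
    return f'df.groupby("{m.group(3)}")["{m.group(2)}"].{agg}()'

def code_to_grounded_utterance(code):
    """Translate generated code back into one canonical utterance,
    teaching the user what the system 'wants them to say'."""
    m = re.match(r'df\.groupby\("(\w+)"\)\["(\w+)"\]\.(\w+)\(\)', code)
    return (f"compute the {m.group(3)} of column '{m.group(2)}' "
            f"for each value of column '{m.group(1)}'")

code = nl_to_code("average of sales by region")
print(code)                              # df.groupby("region")["sales"].mean()
print(code_to_grounded_utterance(code))  # the grounded utterance shown back
```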
8
Full-hand Electro-Tactile Feedback without Obstructing Palmar Side of Hand
Yudai Tanaka (University of Chicago, Chicago, Illinois, United States); Alan Shen (University of Chicago, Chicago, Illinois, United States); Andy Kong (University of Chicago, Chicago, Illinois, United States); Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We present a technique to render tactile feedback to the palmar side of the hand while keeping it unobstructed and, thus, preserving manual dexterity during interactions with physical objects. We implement this by applying electro-tactile stimulation only to the back of the hand and to the wrist. In our approach, there are no electrodes on the palmar side, yet that is where tactile sensations are felt. While we place electrodes outside the user’s palm, we do so in strategic locations that conduct the electrical currents to the median/ulnar nerves, causing tactile sensations on the palmar side of the hand. In our user studies, we demonstrated that our approach renders tactile sensations to 11 different locations on the palmar side while keeping users’ palms free for dexterous manipulations. Our approach enables new applications such as tactile notifications during dexterous activities or VR experiences that rely heavily on physical props.
8
Quantified Canine: Inferring Dog Personality From Wearables
Lakmal Meegahapola (Idiap Research Institute, Martigny, Switzerland); Marios Constantinides (Nokia Bell Labs, Cambridge, United Kingdom); Zoran Radivojevic (Nokia Bell Labs, Cambridge, United Kingdom); Hongwei Li (Nokia Bell Labs, Cambridge, United Kingdom); Daniele Quercia (Nokia Bell Labs, Cambridge, United Kingdom); Michael S. Eggleston (Nokia Bell Labs, Murray Hill, New Jersey, United States)
The ability to assess dog personality can be used, for example, to match shelter dogs with future owners and to personalize dog activities. Such an assessment typically relies on experts or psychological scales administered to dog owners, both of which are costly. To tackle that challenge, we built a device called "Patchkeeper" that can be strapped on the pet's chest and measures activity through an accelerometer and a gyroscope. In an in-the-wild deployment involving 12 healthy dogs, we collected 1300 hours of sensor activity data and dog personality test results from two validated questionnaires. By matching these two datasets, we trained ten machine learning classifiers that predicted dog personality from activity data, achieving AUCs of 0.63 to 0.90, suggesting the value of tracking psychological signals of pets using wearable technologies.
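To make the pipeline in the abstract concrete, here is a minimal Python sketch of the general recipe: extract features from windows of accelerometer/gyroscope data, train a classifier for one personality trait, and score it by AUC. The features, model, and stand-in data are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def window_features(acc, gyr):
    """Illustrative features from one window of accelerometer and
    gyroscope samples (each of shape [n_samples, 3]): per-axis mean,
    per-axis std, and overall movement intensity."""
    feats = []
    for sig in (acc, gyr):
        feats += [sig.mean(axis=0), sig.std(axis=0),
                  [np.linalg.norm(sig, axis=1).mean()]]
    return np.concatenate([np.ravel(f) for f in feats])

# Stand-in data: 200 activity windows with a binary label for one
# personality trait (e.g., high vs. low excitability).
X = np.stack([window_features(rng.normal(size=(500, 3)),
                              rng.normal(size=(500, 3)))
              for _ in range(200)])
y = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {aucs.mean():.2f} ± {aucs.std():.2f}")
```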
8
XAIR: A Framework of Explainable AI in Augmented Reality
Xuhai Xu (Reality Labs Research, Redmond, Washington, United States); Anna Yu (Reality Labs Research, Redmond, Washington, United States); Tanya R. Jonker (Facebook Reality Labs: Research, Redmond, Washington, United States); Kashyap Todi (Reality Labs Research, Redmond, Washington, United States); Feiyu Lu (Reality Labs Research, Redmond, Washington, United States); Xun Qian (Reality Labs Research, Redmond, Washington, United States); João Marcelo Evangelista Belo (Reality Lab Research, Redmond, Washington, United States); Tianyi Wang (Reality Labs Research, Redmond, Washington, United States); Michelle Li (Reality Labs Research, Redmond, Washington, United States); Aran Mun (Reality Labs Research, Redmond, Washington, United States); Te-Yen Wu (Reality Labs Research, Redmond, Washington, United States); Junxiao Shen (Reality Labs Research, Redmond, Washington, United States); Ting Zhang (Meta Inc., Redmond, Washington, United States); Narine Kokhlikyan (Facebook, Menlo Park, California, United States); Fulton Wang (Reality Labs Research, Redmond, Washington, United States); Paul Sorenson (Reality Labs Research, Redmond, Washington, United States); Sophie Kim (Facebook Reality Labs, Redmond, Washington, United States); Hrvoje Benko (Meta, Redmond, Washington, United States)
Explainable AI (XAI) has established itself as an important component of AI-driven interactive systems. With Augmented Reality (AR) becoming more integrated in daily lives, the role of XAI also becomes essential in AR because end-users will frequently interact with intelligent services. However, it is unclear how to design effective XAI experiences for AR. We propose XAIR, a design framework that addresses when, what, and how to provide explanations of AI output in AR. The framework was based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users’ preferences for AR-based explanations, and three workshops with 12 experts collecting their insights about XAI design in AR. XAIR's utility and effectiveness were verified via a study with 10 designers and another study with 12 end-users. XAIR can provide guidelines for designers, inspiring them to identify new design opportunities and achieve effective XAI designs in AR.
8
Affective Profile Pictures: Exploring the Effects of Changing Facial Expressions in Profile Pictures on Text-Based Communication
Chi-Lan Yang (The University of Tokyo, Tokyo, Japan); Shigeo Yoshida (The University of Tokyo, Tokyo, Japan); Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan); Takuji Narumi (The University of Tokyo, Tokyo, Japan); Naomi Yamashita (NTT, Keihanna, Japan)
When receiving text messages from unacquainted colleagues in fully remote workplaces, insufficient mutual understanding and limited social cues can lead people to misinterpret the tone of the message and further influence their impression of remote colleagues. Emojis have been commonly used for supporting expressive communication; however, people seldom use emojis before they become acquainted with each other. Hence, we explored how changing facial expressions in profile pictures could be an alternative channel to communicate socio-emotional cues. By conducting an online controlled experiment with 186 participants, we established that changing facial expressions of profile pictures can influence the impression of the message receivers toward the sender and the message valence when receiving neutral messages. Furthermore, presenting incongruent profile pictures to positive messages negatively affected the interpretation of the message valence, but did not have much effect on negative messages. We discuss the implications of affective profile pictures in supporting text-based communication.
8
BubbleTex: Designing Heterogenous Wettable Areas for Carbonation Bubble Patterns on Surfaces
Harpreet Sareen (The University of Tokyo, Tokyo, Japan); Yibo Fu (The New School, New York, New York, United States); Nour Boulahcen (Telecom ParisTech, Paris, France); Yasuaki Kakehi (The University of Tokyo, Tokyo, Japan)
Materials are a key part of our daily experiences. Recently, researchers have been devising new ways to utilize materials directly from our physical world for the design of objects and interactions. We present a new fabrication technique that enables control of CO2 bubble positions and their size within carbonated liquids. Instead of soap bubbles, boiling water, or droplets, we show the creation of patterns, images, and text through sessile bubbles that exhibit a lifetime of several days. Surfaces with mixed wettability regions are created on glass and plastic using ceramic coatings or plasma projection, leading to patterns that are largely invisible to the human eye. Different regions react to liquids differently. Nucleation is activated after carbonated liquid is poured onto the surface: bubbles nucleate in hydrophobic regions, adhere strongly to the surface, and can be controlled in size from 0.5 mm to 6.5 mm. Bubbles go from initially popping or becoming buoyant during CO2 supersaturation to stabilizing at their positions within minutes. Technical evaluation shows stabilization under various conditions. Our design software allows users to import images and convert them into parametric pixelation forms conducive to fabrication, resulting in nucleation of bubbles at the required positions. Various applications are presented to demonstrate aspects that may be harnessed for a wide range of uses in daily life. Through this work, we enable the use of carbonation bubbles as a new design material for designers and researchers.
8
Lyric App Framework: A Web-based Framework for Developing Interactive Lyric-driven Musical Applications
Jun Kato (National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan)
Lyric videos have become a popular medium to convey lyrical content to listeners, but they present the same content whenever they are played and cannot adapt to listeners' preferences. Lyric apps, as we name them, are a new form of lyric-driven visual art that can render different lyrical content depending on user interaction and address the limitations of static media. To open up this novel design space for programmers and musicians, we present Lyric App Framework, a web-based framework for building interactive graphical applications that play musical pieces and show lyrics synchronized with playback. We designed the framework to provide a streamlined development experience for building production-ready lyric apps with creative coding libraries of choice. We held programming contests twice and collected 52 examples of lyric apps, enabling us to reveal eight representative categories, confirm the framework's effectiveness, and report lessons learned.
7
Smooth as - The Effects of Frame Rate Variation on Game Player Quality of Experience
Shengmei Liu (Worcester Polytechnic Institute, Worcester, Massachusetts, United States); Atsuo Kuwahara (Intel Corporation, Hillsboro, Oregon, United States); James J. Scovell (Intel Corporation, Hillsboro, Oregon, United States); Mark Claypool (WPI, Worcester, Massachusetts, United States)
For gamers, high frame rates are important for a smooth visual display and good quality of experience (QoE). However, high frame rates alone are not enough, as variations in frame display times can degrade QoE even when the average frame rate remains high. While the impact of steady frame rates on player QoE is fairly well-studied, the effects of frame rate variation are not. This paper presents a 33-person user study that evaluates the impact of frame rate variation on users playing three different computer games. Analysis of the results shows that average frame rate alone is a poor predictor of QoE, and frame rate variation has a significant impact on player QoE. While the standard deviation of frame times is promising as a general predictor for QoE, it may not be accurate for all individual games. However, the 95% frame rate floor (the bottom 5% of frame rates the player experiences) appears to be an effective predictor of QoE, both overall and for the individual games tested.
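The "95% frame rate floor" lends itself to a short worked example. Below is a minimal Python sketch of one plausible way to compute it from a trace of per-frame display times; the authors' exact definition may differ in detail.

```python
import numpy as np

def frame_rate_floor(frame_times_ms, floor_pct=5.0):
    """Summarize a trace of per-frame display times (milliseconds).

    Returns the average frame rate and the '95% frame rate floor':
    the frame rate below which only the slowest `floor_pct` percent
    of frames fall (i.e., the bottom 5% of frame rates)."""
    frame_times_ms = np.asarray(frame_times_ms, dtype=float)
    fps = 1000.0 / frame_times_ms              # instantaneous frame rate
    avg_fps = fps.mean()
    floor_fps = np.percentile(fps, floor_pct)  # 5th percentile of frame rates
    return avg_fps, floor_fps

# Example: a mostly-60fps trace with occasional 50 ms stutters.
trace = [16.7] * 95 + [50.0] * 5
avg, floor = frame_rate_floor(trace)
print(f"average: {avg:.1f} fps, 95% floor: {floor:.1f} fps")
```

Note how the stutters barely move the average while the floor drops to 20 fps, which is exactly why the floor can predict QoE where the average cannot.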
7
How to Communicate Robot Motion Intent: A Scoping Review
Max Pascher (Westphalian University of Applied Sciences, Gelsenkirchen, NRW, Germany); Uwe Gruenefeld (University of Duisburg-Essen, Essen, Germany); Stefan Schneegass (University of Duisburg-Essen, Essen, Germany); Jens Gerken (Westphalian University of Applied Sciences, Gelsenkirchen, Germany)
Robots are becoming increasingly omnipresent in our daily lives, supporting us and carrying out autonomous tasks. In Human-Robot Interaction, human actors benefit from understanding the robot's motion intent to avoid task failures and foster collaboration. Finding effective ways to communicate this intent to users has recently received increased research interest. However, no common language has been established to systematize robot motion intent. This work presents a scoping review aimed at unifying existing knowledge. Based on our analysis, we present an intent communication model that depicts the relationship between robot and human through different intent dimensions (intent type, intent information, intent location). We discuss these different intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model, allowing us to identify key patterns and possible directions for future research.
7
Engaging Passers-by with Rhythm: Applying Feedforward Learning to a Xylophonic Media Architecture Facade
Alex Binh Vinh Duc Nguyen (KU Leuven, Leuven, Belgium); Jihae Han (KU Leuven, Leuven, Belgium); Maarten Houben (Eindhoven University of Technology, Eindhoven, Netherlands); Yssmin Bayoumi (KU Leuven, Leuven, Belgium); Andrew Vande Moere (KU Leuven, Leuven, Belgium)
Media architecture exploits interactive technology to encourage passers-by to engage with an architectural environment. Whereas most media architecture installations focus on visual stimulation, we developed a permanent media facade that rhythmically knocks xylophone blocks embedded beneath 11 window sills, according to the human actions constantly traced via an overhead camera. In an attempt to overcome its apparent limitations in engaging passers-by more enduringly and purposefully, our study investigates the impact of feedforward learning, a constructive interaction method that instructs passers-by about the results of their actions. Based on a comparative (n=25) and a one-month in-the-wild (n=1877) study, we propose how feedforward learning could empower passers-by to understand the interaction of more abstract types of media architecture, and how particular quantitative indicators capturing this learning could predict how enduringly and purposefully a passer-by might engage. We believe these contributions could inspire more creative integrations of non-visual modalities in future public interactive interventions.
7
Bias-Aware Systems: Exploring Indicators for the Occurrences of Cognitive Biases when Facing Different Opinions
Nattapat Boonprakong (University of Melbourne, Parkville, Victoria, Australia); Xiuge Chen (The University of Melbourne, Melbourne, Victoria, Australia); Catherine Davey (University of Melbourne, Parkville, Victoria, Australia); Benjamin Tag (University of Melbourne, Melbourne, Victoria, Australia); Tilman Dingler (University of Melbourne, Melbourne, Victoria, Australia)
Cognitive biases have been shown to play a critical role in creating echo chambers and spreading misinformation. They undermine our ability to evaluate information and can influence our behaviour without our awareness. To allow the study of occurrences and effects of biases on information consumption behaviour, we explore indicators for cognitive biases in physiological and interaction data. Therefore, we conducted two experiments investigating how people experience statements that are congruent or divergent from their own ideological stance. We collected interaction data, eye tracking data, hemodynamic responses, and electrodermal activity while participants were exposed to ideologically tainted statements. Our results indicate that people spend more time processing statements that are incongruent with their own opinion. We detected differences in blood oxygenation levels between congruent and divergent opinions, a first step towards building systems to detect and quantify cognitive biases.
7
Here and Now: Creating Improvisational Dance Movements with a Mixed Reality Mirror
Qiushi Zhou (University of Melbourne, Melbourne, Victoria, Australia); Louise Grebel (Université Paris-Saclay, Orsay, France); Andrew Irlitti (University of Melbourne, Melbourne, Australia); Julie Ann Minaai (The University of Melbourne, Southbank, Australia); Jorge Goncalves (University of Melbourne, Melbourne, Australia); Eduardo Velloso (University of Melbourne, Melbourne, Victoria, Australia)
This paper explores using mixed reality (MR) mirrors for supporting improvisational dance making. Motivated by the prevalence of mirrors in dance studios and inspired by Forsythe’s Improvisation Technologies, we conducted workshops with 13 dancers and choreographers to inform the design of future MR visualisation and annotation tools for dance. The workshops involved using a prototype MR mirror as a technology probe that reveals the spatial and temporal relationships between the reflected dancing body and its surroundings during improvisation; speed dating group interviews around future design ideas; follow-up surveys and extended interviews with a digital media dance artist and a dance educator. Our findings highlight how the MR mirror enriches dancers' temporal and spatial perception, creates multi-layered presence, and affords appropriation by dancers. We also discuss the unique place of MR mirrors in the theoretical context of dance and in the history of movement visualisation, and distil lessons for broader HCI research.
6
Exploring Memory-Oriented Interactions with Digital Photos In and Across Time: A Field Study of Chronoscope
Amy Yo Sue Chen (Simon Fraser University, Surrey, British Columbia, Canada); William Odom (Simon Fraser University, Surrey, British Columbia, Canada); Carman Neustaedter (Simon Fraser University, Surrey, British Columbia, Canada); Ce Zhong (Simon Fraser University, Surrey, British Columbia, Canada); Henry Lin (Simon Fraser University, Surrey, British Columbia, Canada)
We describe a field study of Chronoscope, a tangible photo viewer that lets people revisit and explore their digital photos with the support of temporal metadata. Chronoscope offers different temporal modalities for organizing one’s personal digital photo archive, and for exploring possible connections in and across time, and among photos and memories. We deployed four Chronoscopes in four households for three months to understand participants’ experiences over time. Our goals are to investigate the reflective potential of temporal modalities as an alternative design approach for supporting memory-oriented photo exploration, and empirically explore conceptual propositions related to slow technology. Findings revealed that Chronoscope catalyzed a range of reflective experiences on their respective life histories and life stories. It opened up alternative ways of considering time and the potential longevity of personal photo archives. We conclude with implications to present opportunities for future HCI research and practice.
6
Log-it: Supporting Programming with Interactive, Contextual, Structured, and Visual Logs
Peiling Jiang (University of California San Diego, San Diego, California, United States); Fuling Sun (University of California San Diego, San Diego, California, United States); Haijun Xia (University of California, San Diego, San Diego, California, United States)
Logging is a widely used technique for inspecting and understanding programs. However, the presentation of logs still often takes its ancient form of a linear stream of text that resides in a terminal, console, or log file. Despite its simplicity, interpreting log output is often challenging due to the large number of textual logs that lack structure and context. We conducted content analysis and expert interviews to understand the practices and challenges inherent in logging. These activities demonstrated that the current representation of logs does not provide the rich structures programmers need to interpret them or the program's behavior. We present Log-it, a logging interface that enables programmers to interactively structure and visualize logs in situ. A user study with novices and experts showed that Log-it's syntax and interface have a minimal learning curve, and the interactive representations and organizations of logs help programmers easily locate, synthesize, and understand logs.
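The contrast the abstract draws between a linear text stream and structured, context-carrying logs can be shown in a few lines. The sketch below is generic Python, not Log-it's interface, and every record field is a hypothetical example.

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

# A linear text log loses structure the moment it is printed:
logging.info("user=42 action=checkout total=19.99")

# A structured record keeps context machine-readable, so a tool can
# group, filter, and visualize events instead of leaving the reader
# to parse a text stream. (Illustrative only; not Log-it's API.)
record = {
    "event": "checkout",
    "context": {"user_id": 42, "cart_size": 3},  # state when it happened
    "values": {"total": 19.99},
    "source": {"file": "shop.py", "line": 120},  # in-situ code location
}
logging.info(json.dumps(record))
```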
6
Co-Designing with Early Adolescents: Understanding Perceptions of and Design Considerations for Tech-Based Mediation Strategies that Promote Technology Disengagement
Ananta Chowdhury (University of Manitoba, Winnipeg, Manitoba, Canada); Andrea Bunt (University of Manitoba, Winnipeg, Manitoba, Canada)
Children’s excessive use of technology is a growing concern, and despite taking various measures, parents often find it difficult to limit their children’s device use. Limiting tech usage can be especially challenging with early adolescents as they start to develop a sense of autonomy. While numerous tech-based mediation solutions exist, in this paper, we aim to learn from early adolescents directly by having them contribute to co-design activities. Through a multi-session, group-based, online co-design study with 21 early adolescents (ages 11-14), we explore their perceptions towards tech overuse and what types of solutions they propose to help with disengagement. Findings from these co-design sessions contribute insights into how the participants conceptualized the problem of tech overuse, how they envisioned appropriate mediation strategies, and important design considerations. We also reflect on our study methods, which encouraged active participation from our participants and facilitated valuable contributions during the online co-design sessions.
6
ChallengeDetect: Investigating the Potential of Detecting In-Game Challenge Experience from Physiological Measures
Xiaolan Peng (Institute of Software, Chinese Academy of Sciences, Beijing, China); Xurong Xie (Institute of Software, Chinese Academy of Sciences, Beijing, China); Jin Huang (Chinese Academy of Sciences, Beijing, China); Chutian Jiang (Computational Media and Arts Thrust, Guangzhou, China); Haonian Wang (Department of Artificial Intelligence, Beijing, China); Alena Denisova (University of York, York, United Kingdom); Hui Chen (Institute of Software, Chinese Academy of Sciences, Beijing, China); Feng Tian (Institute of Software, Chinese Academy of Sciences, Beijing, China); Hongan Wang (Institute of Software, Chinese Academy of Sciences, Beijing, China)
Challenge is the core element of digital games. The wide spectrum of physical, cognitive, and emotional challenge experiences provided by modern digital games can be evaluated subjectively using a questionnaire, the CORGIS, which allows for a post hoc evaluation of the overall experience that occurred during game play. Measuring this experience dynamically and objectively, however, would allow for a more holistic view of the moment-to-moment experiences of players. This study, therefore, explored the potential of detecting perceived challenge from physiological signals. For this, we collected physiological responses from 32 players who engaged in three typical game scenarios. Using perceived challenge ratings from players and extracted physiological features, we applied multiple machine learning methods and metrics to detect challenge experiences. Results show that most methods achieved a detection accuracy of around 80%. We discuss in-game challenge perception, challenge-related physiological indicators and AI-supported challenge detection to inform future work on challenge evaluation.
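As a hedged sketch of the evaluation setup described above (multiple machine learning methods applied to physiological features with perceived-challenge labels), the Python snippet below compares three common classifiers by cross-validated accuracy on stand-in data. Nothing here reproduces the paper's features or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Stand-in physiological feature windows (e.g., heart rate, EDA, EMG
# statistics) with binary high/low perceived-challenge labels.
X = rng.normal(size=(300, 12))
y = rng.integers(0, 2, size=300)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "forest": RandomForestClassifier(n_estimators=100, random_state=1),
}
for name, model in models.items():
    # Standardize features, then score each method by 5-fold accuracy.
    acc = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5)
    print(f"{name}: accuracy {acc.mean():.2f}")
```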
6
Speech-Augmented Cone-of-Vision for Exploratory Data Analysis
Riccardo Bovo (Imperial College London, London, United Kingdom); Daniele Giunchi (University College London, London, United Kingdom); Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom); Joshua Newn (Lancaster University, Lancaster, Lancashire, United Kingdom); Hans Gellersen (Aarhus University, Aarhus, Denmark); Enrico Costanza (UCL Interaction Centre, London, United Kingdom); Thomas Heinis (Imperial College, London, United Kingdom)
Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and cursor visualizations based on eye-tracking, but these methods have limitations. Verbal communication is often utilized as a complementary strategy to overcome such disadvantages. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than the approximation provided by head direction. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.
6
Drifting Off in Paradise: Why People Sleep in Virtual Reality
Michael Yin (University of British Columbia, Vancouver, British Columbia, Canada); Robert Xiao (University of British Columbia, Vancouver, British Columbia, Canada)
Sleep is important for humans, and past research has considered methods of improving sleep through technologies such as virtual reality (VR). However, there has been limited research on how such VR technology may affect the experiential and practical aspects of sleep, especially outside of a clinical lab setting. We consider this research gap through the lens of individuals that voluntarily engage in the practice of sleeping in VR. Semi-structured interviews with 14 participants that have slept in VR reveal insights regarding the motivations, actions, and experiential factors that uniquely define this practice. We find that participant motives can be largely categorized through either the experiential or social affordances of VR. We tie these motives into findings regarding the unique customs of sleeping in VR, involving set-up both within the physical and virtual space. Finally, we identify current and future challenges for sleeping in VR, and propose prospective design directions.
6
“I normally wouldn't talk with strangers”: Introducing a Socio-Spatial Interface for Fostering Togetherness Between Strangers
Ge Guo (Cornell University, Ithaca, New York, United States); Gilly Leshed (Cornell University, Ithaca, New York, United States); Keith Evan Green (Cornell University, Ithaca, New York, United States)
Interacting with strangers can be beneficial but also challenging. Fortunately, these challenges can lead to design opportunities. In this paper, we present the design and evaluation of a socio-spatial interface, SocialStools, that leverages the human propensity for embodied interaction to foster togetherness between strangers. SocialStools is an installation of three responsive stools on caster wheels that generate sound and imagery in the near environment as three strangers sit on them, move them, and rotate them relative to each other. In our study with 12 groups of three strangers, we found a sense of togetherness emerged through interaction, evidenced by different patterns of socio-spatial movements, verbal communication, non-verbal behavior, and interview responses. We present our findings, articulate reasons for the cultivation of togetherness, consider the unique social affordances of our spatial interface in shifting attention during interpersonal communication, and provide design implications. This research contributes insights toward designing cyber-physical interfaces that foster interaction and togetherness among strangers at a time when cultivating togetherness is especially critical.
6
QButterfly: Lightweight Survey Extension for Online User-Interaction Studies for Non-Tech-Savvy Researchers
Nico Ebert (ZHAW School of Management and Law, Winterthur, Zurich, Switzerland); Björn Scheppler (ZHAW School of Management and Law, Winterthur, Switzerland); Kurt Alexander Ackermann (ZHAW School of Management and Law, Winterthur, Zurich, Switzerland); Tim Geppert (Institut für Wirtschaftsinformatik, Winterthur, Switzerland)
We provide a user-friendly, flexible, and lightweight open-source HCI toolkit (github.com/QButterfly) that allows non-tech-savvy researchers to conduct online user interaction studies using the widespread Qualtrics and LimeSurvey platforms. These platforms already provide rich functionality (e.g., for experiments or usability tests) and therefore lend themselves to an extension to display stimulus web pages and record clickstreams. The toolkit consists of a survey template with embedded JavaScript, a JavaScript library embedded in the HTML web pages, and scripts to analyze the collected data. No special programming skills are required to set up a study or match survey data and user interaction data after data collection. We empirically validated the software in a laboratory and a field study. We conclude that this extension, even in its preliminary version, has the potential to make online user interaction studies (e.g., with crowdsourced participants) accessible to a broader range of researchers.
6
Fingerhints: Understanding Users' Perceptions of and Preferences for On-Finger Kinesthetic Notifications
Adrian-Vasile Catană (Ștefan cel Mare University of Suceava, Suceava, Romania); Radu-Daniel Vatavu (Ștefan cel Mare University of Suceava, Suceava, Romania)
We present "fingerhints," on-finger kinesthetic feedback represented by hyper-extension movements of the index finger, bypassing user agency, for notification delivery. To this end, we designed a custom-made finger-augmentation device, which leverages mechanical force to deliver fingerhints as programmable hyper-extensions of the index finger. We evaluate fingerhints with 21 participants, and report good usability, low technology creepiness, and moderate to high social acceptability. In a second study with 11 new participants, we evaluate the wearable comfort of our fingerhints device against four commercial finger- and hand-augmentation devices. Finally, we present insights from the experience of one participant, who wore our device for eight hours during their daily life. We discuss the user experience of fingerhints in relation to our participants' personality traits, finger dexterity levels, and general attitudes toward notifications, and present implications for interactive systems leveraging on-finger kinesthetic feedback for on-body computing.
6
Thermotion: Design and Fabrication of Thermofluidic Composites for Animation Effects on Object Surfaces
Tianyu Yu (Tsinghua University, Beijing, China); Weiye Xu (Tsinghua University, Beijing, China); Haiqing Xu (Tsinghua University, Beijing, China); Guanhong Liu (Tsinghua University, Beijing, China); Chang Liu (Tsinghua University, Beijing, China); Guanyun Wang (Zhejiang University, Hangzhou, China); Haipeng Mi (Tsinghua University, Beijing, China)
We introduce Thermotion, a novel method using thermofluidic composites to design and display thermochromic animation effects on object surfaces. With fluidic channels embedded under the object surfaces, the composites utilize thermofluidic flows to dynamically control the surface temperature as an actuator for thermochromic paints, which enables researchers and designers for the first time to create animations not only on two- and three-dimensional surfaces but also on surfaces made of flexible everyday materials. We report the design space with six animation primitives and two modification effects, and we demonstrate the design and fabrication workflow with a customized software platform for design and simulation. A range of applications is shown leveraging the objects' dynamic displays both visually and thermally, including dynamic artifacts, teaching aids, and ambient displays. We envision an opportunity to extend thermofluidic composites to other heat-related practices for further dynamic and programmable interactions with temperature.
6
How Instructional Data Physicalization Fosters Reflection in Personal Informatics
Marit Bentvelzen (Utrecht University, Utrecht, Netherlands); Julia Dominiak (Lodz University of Technology, Łódź, Poland); Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland); Frederique Henraat (Utrecht University, Utrecht, Netherlands); Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden)
The ever-increasing number of devices quantifying our lives offers a perspective of high awareness of one's wellbeing, yet it remains a challenge for personal informatics (PI) to effectively support data-based reflection. Effective reflection is recognised as a key factor for PI technologies to foster wellbeing. Here, we investigate whether building tangible representations of health data can offer engaging and reflective experiences. We conducted a between-subjects study where n=60 participants explored their immediate blood pressure data in relation to medical norms. They either used a standard mobile app, built a data representation from LEGO bricks based on instructions, or completed a free-form brick build. We found that building with instructions fostered more comparison and using bricks fostered focused attention. The free-form condition required extra time to complete, and lacked usability. Our work shows that designing instructional physicalisation experiences for PI is a means of improving engagement and understanding of personal data.
6
Breaking Out of the Ivory Tower: A Large-scale Analysis of Patent Citations to HCI Research
Hancheng Cao (Stanford University, Stanford, California, United States); Yujie Lu (NLP, Santa Barbara, California, United States); Yuting Deng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Daniel McFarland (Stanford University, Stanford, California, United States); Michael S. Bernstein (Stanford University, Stanford, California, United States)
What is the impact of human-computer interaction research on industry? While it is impossible to track all research impact pathways, the growing literature on translational research impact measurement offers patent citations as one measure of how industry recognizes and draws on research in its inventions. In this paper, we perform a large-scale measurement study primarily of 70,000 patent citations to premier HCI research venues, tracing how HCI research is cited in United States patents over the last 30 years. We observe that 20.1% of papers from these venues, including 60–80% of papers at UIST and 13% of papers in a broader dataset of SIGCHI-sponsored venues overall, are cited by patents, far greater than premier venues in science overall (9.7%) and NLP (11%). However, the time lag between a patent and its paper citations is long (10.5 years) and getting longer, suggesting that HCI research and practice may not be efficiently connected.
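The two headline measurements (the share of papers cited by at least one patent, and the paper-to-patent time lag) reduce to simple joins and averages. A toy pandas sketch follows; the column names and data are hypothetical, not the study's dataset.

```python
import pandas as pd

# Hypothetical paper metadata and patent-to-paper citation records.
papers = pd.DataFrame({"paper_id": [1, 2, 3, 4],
                       "year": [1995, 2001, 2008, 2015]})
cites = pd.DataFrame({"patent_year": [2006, 2013, 2012],
                      "paper_id":    [1,    1,    3]})

# Share of papers cited by at least one patent.
cited_share = papers["paper_id"].isin(cites["paper_id"]).mean()

# Mean lag between a paper's publication and the citing patents.
lag = (cites.merge(papers, on="paper_id")
            .assign(lag=lambda d: d["patent_year"] - d["year"])["lag"]
            .mean())
print(f"cited by patents: {cited_share:.0%}, mean lag: {lag:.1f} years")
```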
6
WebUI: A Dataset for Enhancing Visual UI Understanding with Web Semantics
Jason Wu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Siyan Wang (Wellesley College, Wellesley, Massachusetts, United States); Siman Shen (Grinnell College, Grinnell, Iowa, United States); Yi-Hao Peng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States); Jeffrey Nichols (Snooty Bird LLC, San Diego, California, United States); Jeffrey P. Bigham (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Modeling user interfaces (UIs) from visual information allows systems to make inferences about the functionality and semantics needed to support use cases in accessibility, app automation, and testing. Current datasets for training machine learning models are limited in size due to the costly and time-consuming process of manually collecting and annotating UIs. We crawled the web to construct WebUI, a large dataset of 400,000 rendered web pages associated with automatically extracted metadata. We analyze the composition of WebUI and show that while automatically extracted data is noisy, most examples meet basic criteria for visual UI modeling. We applied several strategies for incorporating semantics found in web pages to increase the performance of visual UI understanding models in the mobile domain, where less labeled data is available: (i) element detection, (ii) screen classification and (iii) screen similarity.
6
Personalised Yet Impersonal: Listeners' Experiences Of Algorithmic Curation On Music Streaming Services
Sophie Freeman (The University of Melbourne, Melbourne, Australia); Martin Gibbs (The University of Melbourne, Melbourne, Victoria, Australia); Bjorn Nansen (University of Melbourne, Melbourne, Australia)
The consumption of music is increasingly reliant on the personalisation, recommendation, and automated curation features of music streaming services. Using algorithm experience (AX) as a lens, we investigated the user experience of the algorithmic recommendation and automated curation features of several popular music streaming services. We conducted interviews and participant-observation with 15 daily users of music streaming services, followed by a design workshop. We found that despite the utility of increasingly algorithmic personalisation, listeners experienced these algorithmic and recommendation features as impersonal in determining their background listening, music discovery, and playlist curation. While listener desire for more control over recommendation settings is not new, we offer a number of novel insights about music listening to nuance this understanding, particularly through the notion of vibe.
6
AnisoTag: 3D Printed Tag on 2D Surface via Reflection Anisotropy
Zehua Ma (University of Science and Technology of China, Hefei, China); Hang Zhou (Simon Fraser University, Burnaby, British Columbia, Canada); Weiming Zhang (University of Science and Technology of China, Hefei, China)
In the past few years, the widespread use of 3D printing technology has enabled the growth of the market for 3D printed products. On Etsy, a website focused on handmade items, hundreds of individual entrepreneurs are selling their 3D printed products. Inspired by the positive effects of machine-readable tags, like barcodes, on daily product marketing, we propose AnisoTag, a novel tagging method to encode data on the 2D surface of 3D printed objects based on reflection anisotropy. AnisoTag has an unobtrusive appearance and much lower extraction computational complexity, contributing to a lightweight, low-cost tagging system for individual entrepreneurs. On AnisoTag, data are encoded by the proposed tool as reflective anisotropic microstructures, which reflect distinct illumination patterns when irradiated by a collimated laser. Based on this, we implement a real-time detection prototype with inexpensive hardware to determine the reflected illumination pattern and decode data according to their mapping. We evaluate AnisoTag with various 3D printer brands, filaments, and printing parameters, demonstrating its superior usability, accessibility, and reliability for practical usage.
6
TactIcons: Designing 3D Printed Map Icons for People who are Blind or have Low Vision
Leona M. Holloway (Monash University, Melbourne, VIC, Australia); Matthew Butler (Monash University, Melbourne, Australia); Kim Marriott (Monash University, Melbourne, Australia)
Visual icons provide immediate recognition of features on print maps but do not translate well for touch reading by people who are blind or have low vision due to the low fidelity of tactile perception. We explored 3D printed icons as an equivalent to visual icons for tactile maps addressing these problems. We designed over 200 tactile icons (TactIcons) for street and park maps. These were touch tested by blind and sighted people, resulting in a corpus of 33 icons that can be recognised instantly and a further 34 icons that are easily learned. Importantly, this work has informed the creation of detailed guidelines for the design of TactIcons and a practical methodology for touch testing new TactIcons. It is hoped that this work will contribute to the creation of more inclusive, user-friendly tactile maps for people who are blind or have low vision.
6
No Pie in the (Digital) Sky: Co-Imagining the Food Metaverse
Alexandra Covaci (University of Kent, Canterbury, Kent, United Kingdom); Khawla Alhasan (University of Kent, Canterbury, Kent, United Kingdom); Mayank Loonker (University of Kent, Canterbury, United Kingdom); Bernardine Farrell (University of Kent, Canterbury, United Kingdom); Luma Tabbaa (University of Kent, Canterbury, Kent, United Kingdom); Sophia Ppali (University of Kent, Canterbury, United Kingdom); Chee Siang Ang (University of Kent, Canterbury, Kent, United Kingdom)
Human behaviour and habits co-evolve with technology, and the metaverse is poised to become a key player in reshaping how we live our everyday life. Given the importance of food in our daily lives, we ask: how will our relationships with food be transformed by the metaverse, and what are the promises and pitfalls of this technology? To answer this, we propose a co-design study that reveals the important elements people value in their daily interactions with food. We then present a speculative catalogue of novel metaverse food experiences, and insights from discussing these ideas with food designers, anthropologists and metaverse experts. Our work aims to provide designers with inspirations for building a metaverse that: provides inclusive opportunities for the future of food; helps re-discover the forgotten or lost knowledge about food; facilitates the exploration, excitement and joy of eating; and reinvigorates the ways that food can soothe and heal.
5
Corsetto: A Kinesthetic Garment for Designing, Composing for, and Experiencing an Intersubjective Haptic Voice
Ozgun Kilic Afsar (MIT, Cambridge, Massachusetts, United States); Yoav Luft (KTH Royal Institute of Technology, Stockholm, Sweden); Kelsey Cotton (Chalmers University of Technology, Gothenburg, Sweden); Ekaterina R. Stepanova (Simon Fraser University, Surrey, British Columbia, Canada); Claudia Núñez-Pacheco (KTH Royal Institute of Technology, Stockholm, Sweden); Rebecca Kleinberger (Northeastern University, Boston, Massachusetts, United States); Fehmi Ben Abdesslem (Computer Systems, Kista, Sweden); Hiroshi Ishii (MIT, Cambridge, Massachusetts, United States); Kristina Höök (KTH Royal Institute of Technology, Stockholm, Sweden)
We present a novel intercorporeal experience: an intersubjective haptic voice. Through an autobiographical design inquiry, based on singing techniques from the classical opera tradition, we created Corsetto, a kinesthetic garment for transferring somatic reminiscences of vocal experience from an expert singer to a listener. We then composed haptic gestures enacted in the Corsetto, emulating upper-body movements of the live singer performing a piece by Morton Feldman named Three Voices. The gestures in the Corsetto added a haptics-based ‘fourth voice’ to the immersive opera performance. Finally, we invited audiences who were asked to wear Corsetto during live performances. Afterwards, they engaged in micro-phenomenological interviews. The analysis revealed how the Corsetto managed to bridge inner and outer bodily sensations, creating a feeling of a shared intercorporeal experience and dissolving boundaries between listener, singer, and performance. We propose that ‘intersubjective haptics’ can be a generative medium not only for singing performances but for other possible intersubjective experiences as well.
5
Augmenting On-Body Touch Input with Tactile Feedback Through Fingernail Haptics
Peter Khoa Duc Tran (University of Calgary, Calgary, Alberta, Canada); Purna Valli Anusha Gadepalli (Saarland University, Saarbrücken, Saarland, Germany); Jaeyeon Lee (UNIST, Ulsan, Korea, Republic of); Aditya Shekhar Nittala (University of Calgary, Calgary, Alberta, Canada)
The key assumption attributed to on-body touch input is that the skin being touched provides natural tactile feedback. In this paper, we for the first time systematically explore augmenting on-body touch input with computer-generated tactile feedback. We employ vibrotactile actuation on the fingernail to couple on-body touch input with tactile feedback. Results from our first experiment show that users prefer tactile feedback for on-body touch input. In our second experiment, we determine the frequency thresholds for rendering realistic tactile “click” sensations for on-body touch buttons on three different body locations. Finally, in our third experiment, we dig deeper to render highly expressive tactile effects with a single actuator. Our non-metric multi-dimensional analysis shows that haptic augmentation of on-body buttons enhances the expressivity of on-body touch input. Overall, results from our experiments reinforce the need for tactile feedback for on-body touch input and show that actuation on the fingernail is a promising approach.
5
The Tactile Dimension: A Method for Physicalizing Touch Behaviors
Laura J. Perovich (Northeastern University, Boston, Massachusetts, United States); Bernice Rogowitz (Visual Perspectives Research, Westchester, New York, United States); Victoria Crabb (Northeastern University, Boston, Massachusetts, United States); Jack Vogelsang (Northeastern University, Boston, Massachusetts, United States); Sara Hartleben (Northeastern University, Boston, Massachusetts, United States); Dietmar Offenhuber (Northeastern University, Boston, Massachusetts, United States)
Traces of touch provide valuable insight into how we interact with the physical world. Measuring touch behavior, however, is expensive and imprecise. Utilizing a fluorescent UV tracer powder, we developed a low-cost analog method to capture persistent, high-contrast touch records on arbitrary objects. We describe our process for selecting a tracer, methods for capturing, enhancing, and aggregating traces, and approaches to examining qualitative aspects of the user experience. Three user studies demonstrate key features of this method. First, we show that it provides clear and durable traces on objects representative of scientific visualization, physicalization, and product design. Second, we demonstrate how this method could be used to study touch perception, by measuring how task and narrative framing elicit different touch behaviors on the same object. Third, we demonstrate how this method can be used to evaluate data physicalizations by observing how participants touch two different physicalizations of COVID-19 time-series data.
5
Negotiating Experience and Communicating Information Through Abstract Metaphor
Courtney N. Reed (Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany)Paul Strohmeier (Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany)Andrew P. McPherson (Queen Mary University of London, London, United Kingdom)
An implicit assumption in metaphor use is that it requires grounding in a familiar concept, prominently seen in the popular Desktop Metaphor. In human-to-human communication, however, abstract metaphors, without such grounding, are often used with great success. To understand when and why metaphors work, we present a case study of metaphor use in voice teaching. Voice educators must teach about subjective, sensory experiences and rely on abstract metaphor to express information about unseen and intangible processes inside the body. We present a thematic analysis of metaphor use by 12 voice teachers. We found that metaphor works not because of strong grounding in the familiar, but because of its ambiguity and flexibility, allowing shared understanding between individual lived experiences. We summarise our findings in a model of metaphor-based communication. This model can be used as an analysis tool within the existing taxonomies of metaphor in user interaction for better understanding why metaphor works in HCI. It can also be used as a design resource for thinking about metaphor use and abstracting metaphor strategies from both novel and existing designs.
5
Evaluating Large Language Models in Generating Synthetic HCI Research Data: a Case Study
Perttu Hämäläinen (Aalto University, Espoo, Finland)Mikke Tavast (Aalto University, Espoo, Finland)Anton Kunnari (University of Helsinki, Helsinki, Finland)
Collecting data is one of the bottlenecks of Human-Computer Interaction (HCI) research. Motivated by this, we explore the potential of large language models (LLMs) to generate synthetic user research data. We use OpenAI’s GPT-3 model to generate open-ended questionnaire responses about experiencing video games as art, a topic not tractable with traditional computational user models. We test whether synthetic responses can be distinguished from real ones, analyze the errors in synthetic data, and investigate content similarities between synthetic and real data. We conclude that GPT-3 can, in this context, yield believable accounts of HCI experiences. Given the low cost and high speed of LLM data generation, synthetic data should be useful for ideating and piloting new experiments, although any findings must always be validated with real data. The results also raise a concern: if employed by malicious users of crowdsourcing services, LLMs may make crowdsourcing of self-report data fundamentally unreliable.
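For readers curious what such generation might look like in practice, here is a minimal sketch using OpenAI's legacy GPT-3 completion API. This is not the authors' actual pipeline: the prompt wording, model name, and sampling parameters are illustrative assumptions.

    import openai

    openai.api_key = "sk-..."  # your API key

    PROMPT = (
        "You are a participant in a user study about experiencing video games as art.\n"
        "Question: Describe a game you consider a work of art, and explain why.\n"
        "Answer:"
    )

    def synthetic_response() -> str:
        # Sample one open-ended response from a GPT-3-family model.
        result = openai.Completion.create(
            engine="text-davinci-003",
            prompt=PROMPT,
            max_tokens=200,
            temperature=0.9,  # high temperature yields more diverse "participants"
        )
        return result["choices"][0]["text"].strip()

    responses = [synthetic_response() for _ in range(10)]  # a small synthetic panel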
5
Are You Killing Time? Predicting Smartphone Users’ Time-killing Moments via Fusion of Smartphone Sensor Data and Screenshots
Yu-Chun Chen (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Yu-Jen Lee (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Kuei-Chun Kao (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Jie Tsai (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)En-Chi Liang (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Wei-Chen Chiu (National Chiao Tung University, Hsinchu City, Taiwan)Faye Shih (Bryn Mawr College, Bryn Mawr, Pennsylvania, United States)Yung-Ju Chang (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)
Time-killing on smartphones has become a pervasive activity, and time-killing moments could be opportune for delivering content to users. To our knowledge, this research is the first attempt at time-killing detection that leverages the fusion of phone-sensor and screenshot data. We collected nearly one million user-annotated screenshots from 36 Android users. Using this dataset, we built a deep-learning fusion model that achieved a precision of 0.83 and an AUROC of 0.72. We further employed a two-stage clustering approach to separate users into four groups according to their phone-usage behavior patterns, and then built a fusion model for each group. The four models’ performance, though diverse, yielded a better average precision of 0.87 and AUROC of 0.76, superior to those of the general, unified model shared among all users. We investigate and discuss the features of the four time-killing behavior clusters that explain why the models’ performance differs.
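As a sketch of the general fusion idea (not the paper's architecture; the small CNN, layer sizes, and input dimensions are assumptions), a two-branch PyTorch model might combine a screenshot embedding with tabular sensor features:

    import torch
    import torch.nn as nn

    class FusionModel(nn.Module):
        def __init__(self, n_sensor_features: int):
            super().__init__()
            # Screenshot branch: a small CNN producing a 64-d embedding.
            self.image_branch = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 64), nn.ReLU(),
            )
            # Sensor branch: an MLP over tabular phone-sensor features.
            self.sensor_branch = nn.Sequential(
                nn.Linear(n_sensor_features, 64), nn.ReLU(),
            )
            # Fusion head: concatenate both embeddings, output P(time-killing).
            self.head = nn.Sequential(
                nn.Linear(128, 32), nn.ReLU(),
                nn.Linear(32, 1),
            )

        def forward(self, screenshot, sensors):
            z = torch.cat([self.image_branch(screenshot),
                           self.sensor_branch(sensors)], dim=1)
            return torch.sigmoid(self.head(z)).squeeze(1)

    model = FusionModel(n_sensor_features=20)
    p = model(torch.randn(8, 3, 128, 128), torch.randn(8, 20))  # batch of 8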
5
On the Design of AI-powered Code Assistants for Notebooks
Andrew M. McNutt (University of Chicago, Chicago, Illinois, United States)Chenglong Wang (Microsoft Research, Redmond, Washington, United States)Robert A. DeLine (Microsoft Corp, Redmond, Washington, United States)Steven M. Drucker (Microsoft Research, Redmond, Washington, United States)
AI-powered code assistants, such as Copilot, are quickly becoming a ubiquitous component of contemporary coding contexts. Among these environments, computational notebooks, such as Jupyter, are of particular interest as they provide rich interface affordances that interleave code and output in a manner that allows for both exploratory and presentational work. Despite their popularity, little is known about the appropriate design of code assistants in notebooks. We investigate the potential of code assistants in computational notebooks by creating a design space (reified from a survey of extant tools) and through an interview-design study (with 15 practicing data scientists). Through this work, we identify challenges and opportunities for future systems in this space, such as the value of disambiguation for tasks like data visualization, the potential of tightly scoped domain-specific tools (like linters), and the importance of polite assistants.
5
Design, Mould, Grow!: A Fabrication Pipeline for Growing 3D Designs Using Myco-Materials
Phillip Gough (The University of Sydney, Sydney, NSW, Australia)Praneeth Bimsara Perera (University of Sydney, Sydney, New South Wales, Australia)Michael A. Kertesz (The University of Sydney, Sydney, Australia)Anusha Withana (The University of Sydney, Sydney, NSW, Australia)
There is growing interest in sustainable fabrication approaches, including material conservation and the utilisation of waste materials. In particular, recent work has applied organic myco-materials, made from fungi, to develop tangible, interactive devices. However, a systematic approach to 3D fabrication using myco-materials remains under-explored. In this paper, we present a parametric design tool and a fabrication pipeline for growing 3D designs using the mycelia of edible fungi species, such as Reishi or Oyster mushrooms. The tool is designed based on empirical results from a series of technical evaluations of the geometric and material qualities of 3D-grown myco-objects. Furthermore, the paper introduces an easy-to-replicate fabrication process that can recycle combinations of organic waste materials, such as sawdust and coffee grounds, to grow mycelia. Through a series of demonstration applications, we identify the challenges and opportunities of working with myco-materials in the HCI context.
5
Notable: On-the-fly Assistant for Data Storytelling in Computational Notebooks
Haotian Li (The Hong Kong University of Science and Technology, Hong Kong, China)Lu Ying (Zhejiang University, Hangzhou, Zhejiang, China)Haidong Zhang (Microsoft Research Asia, Beijing, China)Yingcai Wu (Zhejiang University, Hangzhou, Zhejiang, China)Huamin Qu (The Hong Kong University of Science and Technology, Hong Kong, China)Yun Wang (Microsoft Research Asia, Beijing, China)
Computational notebooks are widely used for data analysis. Their interleaved displays of code and execution results (e.g., visualizations) are welcome because they enable iterative analysis and preserve the exploration process. However, communicating data findings remains challenging in computational notebooks. Users have to carefully separate useful findings from useless ones, document them with text and visual embellishments, and then organize them in different tools. Such a workflow greatly increases their workload, according to our interviews with practitioners. To address this challenge, we designed Notable, which offers on-the-fly assistance for data storytelling in computational notebooks. It provides intelligent support to minimize the work of documenting and organizing data findings and diminishes the cost of switching between data exploration and storytelling. To evaluate Notable, we conducted a user study with 12 data workers. Participants’ feedback confirms its effectiveness and usability.
5
It is Okay to be Distracted: How Real-time Transcriptions Facilitate Online Meeting with Distraction
Seoyun Son (KAIST, Daejeon, Korea, Republic of)Junyoung Choi (KAIST, Daejeon, Korea, Republic of)Sunjae Lee (KAIST, Daejeon, Korea, Republic of)Jean Y. Song (DGIST, Daegu, Korea, Republic of)Insik Shin (KAIST, Daejeon, Korea, Republic of)
Online meetings are indispensable in collaborative remote work environments, but they are vulnerable to distraction due to their distributed, location-agnostic nature. While distraction often lowers meeting quality through loss of engagement and context, natural multitasking has positive tradeoffs, such as increased productivity within a given unit of time. In this study, we investigate the impact of real-time transcriptions (i.e., full transcripts, summaries, and keywords) as a way to help people follow online meetings through distracting moments while still preserving multitasking behaviors. Through two rounds of controlled user studies, we show qualitatively and quantitatively that people can better catch up with the meeting flow, and feel less disrupted, when using real-time transcriptions. The benefits of real-time transcriptions were more pronounced after distracting activities. Furthermore, we reveal additional impacts of real-time transcriptions (e.g., supporting content recall) and suggest design implications for future online meeting platforms, where transcriptions could be provided adaptively to users with different purposes.
5
Tailoring a Persuasive Game to Promote Secure Smartphone Behaviour
Anirudh Ganesh (Dalhousie University, Halifax, Nova Scotia, Canada)Chinenye Ndulue (Dalhousie University, Canada, Halifax, Nova Scotia, Canada)Rita Orji (Dalhousie University, Halifax, Nova Scotia, Canada)
Smartphone use has become an integral part of everyday life. Because smartphones are ubiquitous and serve many functions, the data they handle are sensitive. Despite the measures companies take to protect users’ data, research has shown that people do not take the necessary actions to stay safe from security and privacy threats. Persuasive games have been deployed across various domains to motivate positive behaviour change. Yet even though persuasive games can be effective, research has shown that a one-size-fits-all design may be less effective than versions tailored to the player. This paper presents the design and evaluation of a persuasive game for improving user awareness of smartphone security and privacy, tailored to the user’s motivational orientation using Regulatory Focus Theory. Results from our mixed-methods in-the-wild study of 102 people, followed by one-on-one interviews with 25 of them, show that the tailored version of the game outperformed the non-tailored version in improving users’ secure smartphone behaviour. We contribute to the broader HCI community by offering design suggestions and discussing the benefits of tailoring persuasive games.
5
TicTacToes: Assessing Toe Movements as an Input Modality
Florian Müller (LMU Munich, Munich, Germany)Daniel Schmitt (TU Darmstadt, Darmstadt, Germany)Andrii Matviienko (Technical University of Darmstadt, Darmstadt, Germany)Dominik Schön (TU Darmstadt, Darmstadt, Germany)Sebastian Günther (Technical University of Darmstadt, Darmstadt, Germany)Thomas Kosch (HU Berlin, Berlin, Germany)Martin Schmitz (Saarland University, Saarbrücken, Germany)
From carrying grocery bags to holding onto handles on the bus, there are a variety of situations where one or both hands are busy, hindering the vision of ubiquitous interaction with technology. Voice commands, as a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research only considered the movement of the foot as a whole, in this work, we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces.
5
Accessible Data Representation with Natural Sound
Md Naimul Hoque (University of Maryland, College Park, Maryland, United States)Md Ehtesham-Ul-Haque (Pennsylvania State University, University Park, Pennsylvania, United States)Niklas Elmqvist (University of Maryland, College Park, Maryland, United States)Syed Masum Billah (Pennsylvania State University, University Park, Pennsylvania, United States)
Sonification translates data into non-speech audio. Such auditory representations can make data visualization accessible to people who are blind or have low vision (BLV). This paper presents a sonification method for translating common data visualizations into a blend of natural sounds. We hypothesize that people's familiarity with sounds drawn from nature, such as birds singing in a forest, and their ability to listen to these sounds in parallel will enable BLV users to perceive multiple data points being sonified at the same time. Informed by an extensive literature review and a preliminary study with 5 BLV participants, we designed an accessible data representation tool, Susurrus, that combines our sonification method with other accessibility features, such as keyboard interaction and text-to-speech feedback. Finally, we conducted a user study with 12 BLV participants and report on the potential and applications of natural sounds for sonification compared to existing sonification tools.
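As a minimal sketch of mapping a data series to natural-sound cues in the spirit of Susurrus (not the tool's implementation): the sound files, the category-to-sound mapping, and the gain formula below are hypothetical, and pydub is just one library that could play the samples.

    from pydub import AudioSegment
    from pydub.playback import play

    # Hypothetical natural-sound samples, one per data category.
    SOUNDS = {
        "cases":  AudioSegment.from_wav("birdsong.wav"),
        "deaths": AudioSegment.from_wav("rain.wav"),
    }

    def sonify(category: str, value: float, v_min: float, v_max: float):
        # Play the category's natural sound, louder for larger values:
        # map value linearly to a gain between -20 dB (quiet) and 0 dB (full).
        t = (value - v_min) / (v_max - v_min)
        gain_db = -20.0 * (1.0 - t)
        play(SOUNDS[category] + gain_db)

    series = [("cases", 120.0), ("cases", 340.0), ("deaths", 15.0)]
    for cat, v in series:
        sonify(cat, v, v_min=0.0, v_max=400.0)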
5
Subjective Probability Correction for Uncertainty Representations
Fumeng Yang (Northwestern University, Evanston, Illinois, United States)Maryam Hedayati (Northwestern University, Evanston, Illinois, United States)Matthew Kay (Northwestern University, Chicago, Illinois, United States)
We propose a new approach to uncertainty communication: we keep the uncertainty representation fixed, but adjust the distribution displayed to compensate for biases in people’s subjective probability in decision-making. To do so, we adopt a linear-in-probit model of subjective probability and derive two corrections to a Normal distribution based on the model’s intercept and slope: one correcting all right-tailed probabilities, and the other preserving the mode and one focal probability. We then conduct two experiments on U.S. demographically-representative samples. We show participants hypothetical U.S. Senate election forecasts as text or a histogram and elicit their subjective probabilities using a betting task. The first experiment estimates the linear-in-probit intercepts and slopes, and confirms the biases in participants’ subjective probabilities. The second, preregistered follow-up shows participants the bias-corrected forecast distributions. We find the corrections substantially improve participants’ decision quality by reducing the integrated absolute error of their subjective probabilities compared to the true probabilities. These corrections can be generalized to any univariate probability or confidence distribution, giving them broad applicability. Our preprint, code, data, and preregistration are available at https://doi.org/10.17605/osf.io/kcwxm.
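To make the right-tailed correction idea concrete: if subjective probability follows a linear-in-probit distortion s(p) = Φ(a + b·Φ⁻¹(p)), the display can show the probability q whose distorted reading equals the true p, i.e., q = Φ((Φ⁻¹(p) − a)/b). A minimal sketch follows; the intercept and slope values are made up for illustration, not the fitted estimates from the paper.

    from scipy.stats import norm

    a, b = -0.1, 0.7  # hypothetical fitted intercept and slope

    def distorted(p: float) -> float:
        # Subjective probability a viewer forms when shown probability p.
        return norm.cdf(a + b * norm.ppf(p))

    def corrected(p_true: float) -> float:
        # Probability to display so the distorted reading equals p_true.
        return norm.cdf((norm.ppf(p_true) - a) / b)

    q = corrected(0.80)
    assert abs(distorted(q) - 0.80) < 1e-9  # displaying q reads back as 0.80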