List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

11
Tagnoo: Enabling Smart Room-Scale Environments with RFID-Augmented Plywood
Yuning Su (Simon Fraser University, Burnaby, British Columbia, Canada), Tingyu Zhang (Simon Fraser University, Burnaby, British Columbia, Canada), Jiuen Feng (University of Science and Technology of China, Hefei, Anhui, China), Yonghao Shi (Simon Fraser University, Burnaby, British Columbia, Canada), Xing-Dong Yang (Simon Fraser University, Burnaby, British Columbia, Canada), Te-Yen Wu (Florida State University, Tallahassee, Florida, United States)
Tagnoo is a computational plywood augmented with RFID tags, aimed at empowering woodworkers to effortlessly create room-scale smart environments. Unlike existing solutions, Tagnoo does not necessitate technical expertise or disrupt established woodworking routines. This battery-free and cost-effective solution seamlessly integrates computation capabilities into plywood, while preserving its original appearance and functionality. In this paper, we explore various parameters that can influence Tagnoo's sensing performance and woodworking compatibility through a series of experiments. Additionally, we demonstrate the construction of a small office environment, comprising a desk, chair, shelf, and floor, all crafted by an experienced woodworker using conventional tools such as a table saw and screws while adhering to established construction workflows. Our evaluation confirms that the smart environment can accurately recognize 18 daily objects and user activities, such as a user sitting on the floor or a glass lunchbox placed on the desk, with over 90% accuracy.
11
TypeDance: Creating Semantic Typographic Logos from Image through Personalized Generation
Shishi Xiao (The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China), Liangwei Wang (The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China), Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, Hong Kong), Wei Zeng (The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China)
Semantic typographic logos harmoniously blend typeface and imagery to represent semantic concepts while maintaining legibility. Conventional methods using spatial composition and shape substitution are hindered by the conflicting requirement for achieving seamless spatial fusion between geometrically dissimilar typefaces and semantics. While recent advances made AI generation of semantic typography possible, the end-to-end approaches exclude designer involvement and disregard personalized design. This paper presents TypeDance, an AI-assisted tool incorporating design rationales with the generative model for personalized semantic typographic logo design. It leverages combinable design priors extracted from uploaded image exemplars and supports type-imagery mapping at various structural granularity, achieving diverse aesthetic designs with flexible control. Additionally, we instantiate a comprehensive design workflow in TypeDance, including ideation, selection, generation, evaluation, and iteration. A two-task user evaluation, including imitation and creation, confirmed the usability of TypeDance in design across different usage scenarios.
10
SplitBody: Reducing Mental Workload while Multitasking via Muscle Stimulation
Romain Nith (University of Chicago, Chicago, Illinois, United States), Yun Ho (University of Chicago, Chicago, Illinois, United States), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
Techniques like electrical muscle stimulation (EMS) offer promise in assisting physical tasks by automating movements, e.g., shaking a spray-can or tapping a button. However, existing actuation systems improve the performance of a task that users are already focusing on (e.g., users are already focused on using the spray-can). Instead, we investigate whether these interactive-actuation systems (e.g., EMS) offer any benefits if they automate a task that happens in the background of the user's focus. Thus, we explored whether automating a repetitive movement via EMS would reduce mental workload while users perform parallel tasks (e.g., focusing on writing an essay while EMS stirs a pot of soup). In our study, participants performed a cognitively-demanding multitask aided by EMS (SplitBody condition) or performed by themselves (baseline). We found that with SplitBody performance increased (35% on both tasks, 18% on the non-EMS-automated task), physical-demand decreased (31%), and mental-workload decreased (26%).
10
RELIC: Investigating Large Language Model Responses using Self-Consistency
Furui Cheng (ETH Zürich, Zürich, Switzerland), Vilém Zouhar (ETH Zurich, Zurich, Switzerland), Simran Arora (Stanford University, Stanford, California, United States), Mrinmaya Sachan (ETH Zurich, Zurich, Switzerland), Hendrik Strobelt (IBM Research AI, Cambridge, Massachusetts, United States), Mennatallah El-Assady (ETH Zürich, Zürich, Switzerland)
Large Language Models (LLMs) are notorious for blending fact with fiction and generating non-factual content, known as hallucinations. To address this challenge, we propose an interactive system that helps users gain insight into the reliability of the generated text. Our approach is based on the idea that the self-consistency of multiple samples generated by the same LLM relates to its confidence in individual claims in the generated texts. Using this idea, we design RELIC, an interactive system that enables users to investigate and verify semantic-level variations in multiple long-form responses. This allows users to recognize potentially inaccurate information in the generated text and make necessary corrections. From a user study with ten participants, we demonstrate that our approach helps users better verify the reliability of the generated text. We further summarize the design implications and lessons learned from this research for future studies of reliable human-LLM interactions.
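The self-consistency idea can be sketched in a few lines: sample several responses to the same prompt and score each claim by how many samples support it. The sketch below is a crude illustration, not RELIC's implementation; it uses substring matching where the paper checks semantic-level agreement between long-form responses.

```python
def self_consistency(claim: str, samples: list[str]) -> float:
    """Fraction of independently sampled responses that support a claim.
    Substring matching is a crude proxy for RELIC's semantic comparison."""
    return sum(claim.lower() in s.lower() for s in samples) / len(samples)

# Toy usage: three samples from the same prompt, one of which disagrees.
samples = [
    "Marie Curie won two Nobel Prizes.",
    "She won two Nobel Prizes, in Physics and in Chemistry.",
    "Marie Curie won a single Nobel Prize.",
]
print(self_consistency("won two nobel prizes", samples))  # ~0.67
```

A low score flags a claim the user should verify, which is the interaction RELIC surfaces in its interface.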
10
Using the Visual Language of Comics to Alter Sensations in Augmented Reality
Arpit Bhatia (University of Copenhagen, Copenhagen, Denmark), Henning Pohl (Aalborg University, Aalborg, Denmark), Teresa Hirzle (University of Copenhagen, Copenhagen, Denmark), Hasti Seifi (Arizona State University, Tempe, Arizona, United States), Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)
Augmented Reality (AR) excels at altering what we see but non-visual sensations are difficult to augment. To augment non-visual sensations in AR, we draw on the visual language of comic books. Synthesizing comic studies, we create a design space describing how to use comic elements (e.g., onomatopoeia) to depict non-visual sensations (e.g., hearing). To demonstrate this design space, we built eight demos, such as speed lines to make a user think they are faster and smell lines to make a scent seem stronger. We evaluate these elements in a qualitative user study (N=20) where participants performed everyday tasks with comic elements added as augmentations. All participants stated feeling a change in perception for at least one sensation, with perceived changes detected by between four participants (touch) and 15 participants (hearing). The elements also had positive effects on emotion and user experience, even when participants did not feel changes in perception.
9
ARCADIA: A Gamified Mixed Reality System for Emotional Regulation and Self-Compassion
José Luis Soler-Domínguez (Instituto Tecnológico de Informática, Valencia, Spain), Samuel Navas-Medrano (Instituto Tecnológico de Informática, Valencia, Spain), Patricia Pons (Instituto Tecnológico de Informática, Valencia, Spain)
Mental health and wellbeing have become significant challenges in global society, and emotional regulation strategies hold the potential to offer a transversal approach to addressing them. However, the persistently declining adherence of patients to therapeutic interventions, coupled with the limited applicability of current technological interventions across diverse individuals and diagnoses, underscores the need for innovative solutions. We present ARCADIA, a Mixed-Reality platform strategically co-designed with therapists to enhance emotional regulation and self-compassion. ARCADIA comprises several gamified therapeutic activities, with a strong emphasis on fostering patient motivation. Through a dual study involving therapists and mental health patients, we validate the fully functional prototype of ARCADIA. Encouraging results are observed in terms of system usability, user engagement, and therapeutic potential. These findings lead us to believe that the combination of Mixed Reality and gamified therapeutic activities could be a significant tool in the future of mental health.
9
E-Acrylic: Electronic-Acrylic Composites for Making Interactive Artifacts
Bo Han (National University of Singapore, Singapore, Singapore), Xin Liu (National University of Singapore, Singapore, Singapore), Ching Chiuan Yen (National University of Singapore, Singapore, Singapore), Clement Zheng (National University of Singapore, Singapore, Singapore)
Electronic composites incorporate computing into physical materials, expanding the materiality of interactive systems for designers. In this research, we investigated acrylic as a substrate for electronics. Acrylic is valued for its visual and structural properties and is used widely in industrial design. We propose e-acrylic, an electronic composite that incorporates electronic circuits with acrylic sheets. Our approach to making this composite is centered on acrylic making practices that industrial designers are familiar with. We outline this approach systematically, including leveraging laser cutting to embed circuits into acrylic sheets, as well as different ways to shape e-acrylic into 3D objects. With this approach, we explored using e-acrylic to design interactive artifacts. We reflect on these applications to surface a design space of tangible interactive artifacts possible with this composite. We also discuss the implications of aligning electronics to an existing making practice, and working with the holistic materiality that e-acrylic embodies.
9
Comfortable Mobility vs. Attractive Scenery: The Key to Augmenting Narrative Worlds in Outdoor Locative Augmented Reality Storytelling
Hyerim Park (KAIST, Daejeon, Korea, Republic of), Aram Min (Technical Research Institute, Hanmac Engineering, Seoul, Korea, Republic of), Hyunjin Lee (KAIST, Daejeon, Korea, Republic of), Maryam Shakeri (K.N. Toosi University of Technology, Tehran, Iran, Islamic Republic of), Ikbeom Jeon (KAIST, Daejeon, Korea, Republic of), Woontack Woo (KAIST, Daejeon, Korea, Republic of)
We investigate how path context, encompassing both comfort and attractiveness, shapes user experiences in outdoor locative storytelling using Augmented Reality (AR). Addressing a research gap that predominantly concentrates on indoor settings or narrative backdrops, our user-focused research delves into the interplay between perceived path context and locative AR storytelling on routes with diverse walkability levels. We examine the correlation and causation between narrative engagement, spatial presence, perceived workload, and perceived path context. Our findings show that on paths with reasonable path walkability, attractive elements positively influence the narrative experience. However, even in environments with assured narrative walkability, inappropriate safety elements can divert user attention to mobility, hindering the integration of real-world features into the narrative. These results carry significant implications for path creation in outdoor locative AR storytelling, underscoring the importance of ensuring comfort and maintaining a balance between comfort and attractiveness to enrich the outdoor AR storytelling experience.
9
EmoWear: Exploring Emotional Teasers for Voice Message Interaction on Smartwatches
Pengcheng An (Southern University of Science and Technology, Shenzhen, China), Jiawen Stefanie Zhu (University of Waterloo, Waterloo, Ontario, Canada), Zibo Zhang (University of Waterloo, Waterloo, Ontario, Canada), Yifei Yin (University of Toronto Scarborough, Scarborough, Ontario, Canada), Qingyuan Ma (Chalmers University of Technology, Gothenburg, Sweden), Che Yan (Huawei Canada, Markham, Ontario, Canada), Linghao Du (Huawei, Markham, Ontario, Canada), Jian Zhao (University of Waterloo, Waterloo, Ontario, Canada)
Voice messages, by nature, prevent users from gauging the emotional tone without fully diving into the audio content. This hinders the shared emotional experience at the pre-retrieval stage. Research scarcely explored "Emotional Teasers"—pre-retrieval cues offering a glimpse into an awaiting message's emotional tone without disclosing its content. We introduce EmoWear, a smartwatch voice messaging system enabling users to apply 30 animation teasers on message bubbles to reflect emotions. EmoWear eases senders' choice by prioritizing emotions based on semantic and acoustic processing. EmoWear was evaluated in comparison with a mirroring system using color-coded message bubbles as emotional cues (N=24). Results showed EmoWear significantly enhanced emotional communication experience in both receiving and sending messages. The animated teasers were considered intuitive and valued for diverse expressions. Desirable interaction qualities and practical implications are distilled for future design. We thereby contribute both a novel system and empirical knowledge concerning emotional teasers for voice messaging.
8
"I Am So Overwhelmed I Don't Know Where to Begin!" Towards Developing Relationship-Based and Values-Based End-of-Life Data Planning Approaches
Dylan Thomas Doyle (University of Colorado Boulder, Boulder, Colorado, United States), Jed R. Brubaker (University of Colorado Boulder, Boulder, Colorado, United States)
To support people at the end of life as they create management plans for their assets, planning approaches like estate planning are increasingly considering data. HCI scholarship has argued that developing more effective planning approaches to support end-of-life data planning is important. However, empirical research is needed to evaluate specific approaches and identify design considerations. To support end-of-life data planning, this paper presents a qualitative study evaluating two approaches to co-designing end-of-life data plans with participants. We find that asset-first inventory-centric approaches, common in material estate planning, may be ineffective when making plans for data. In contrast, heavily facilitated, mission-driven, relationship-centric approaches were more effective. This study expands previous research by validating the importance of starting end-of-life data planning with relationships and values, and highlights collaborative facilitation as a critical part of successful data planning approaches.
8
MoiréWidgets: High-Precision, Passive Tangible Interfaces via Moiré Effect
Daniel Campos Zamora (University of Washington, Seattle, Washington, United States), Mustafa Doga Dogan (MIT CSAIL, Cambridge, Massachusetts, United States), Alexa Siu (Adobe Research, San Jose, California, United States), Eunyee Koh (Adobe Research, San Jose, California, United States), Chang Xiao (Adobe Research, San Jose, California, United States)
We introduce MoiréWidgets, a novel approach for tangible interaction that harnesses the Moiré effect—a prevalent optical phenomenon—to enable high-precision event detection on physical widgets. Unlike other electronics-free tangible user interfaces which require close coupling with external hardware, MoiréWidgets can be used at greater distances while maintaining high-resolution sensing of interactions. We define a set of interaction primitives, e.g., buttons, sliders, and dials, which can be used as standalone objects or combined to build complex physical controls. These consist of 3D printed structural mechanisms with patterns printed on two layers—one on paper and the other on a plastic transparency sheet—which create a visual signal that amplifies subtle movements, enabling the detection of user inputs. Our technical evaluation shows that our method outperforms standard fiducial markers and maintains sub-millimeter accuracy at 100 cm distance and wide viewing angles. We demonstrate our approach by creating an audio console and indicate how our approach could extend to other domains.
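The amplification the authors exploit is standard moiré optics; the relation below is textbook material, not taken from the paper. Superposing two line gratings with pitches $p$ and $p(1+\delta)$ produces fringes with the much larger period

```latex
p_m \;=\; \frac{p \cdot p(1+\delta)}{\lvert p(1+\delta) - p \rvert}
      \;=\; \frac{p(1+\delta)}{\delta} \;\approx\; \frac{p}{\delta},
```

so translating one grating by $x$ shifts the fringe pattern by roughly $x/\delta$. With $\delta = 0.01$, for instance, a 10 µm movement yields a 1 mm fringe shift, which is the kind of optical gain that lets a distant camera resolve sub-millimeter widget motion.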
8
Waiting Time Perceptions for Faster Count-downs/ups Are More Sensitive Than Slower Ones: Experimental Investigation and Its Application
Takanori Komatsu (Meiji University, Tokyo, Japan), Chenxi Xie (Meiji University, Tokyo, Japan), Seiji Yamada (National Institute of Informatics, Tokyo, Japan)
Countdowns and count-ups are very useful displays that explicitly show how long users should wait and also show the current processing states of a given task. Most countdowns or count-ups decrease or increase their digit every one second exactly, and most users have an implicit assumption that the digit changes every one second exactly. However, there are no studies that investigate how users perceive wait times with these countdowns and count-ups and that consider changing users' perception of time passing as shorter than the actual passage of time by means of countdowns and count-ups while taking into account such user assumptions. To clarify these issues, we first investigated how users perceive countdowns "from 3/5/10 to 0" and count-ups "from 0 to 3/5/10" that have different lengths of intervals from 800 to 1200 msec (Experiment 1). Next, on the basis of the results of Experiment 1, we explored a novel method for presenting countdowns to make users perceive the wait time as being shorter than the actual wait time (Experiment 2) and investigated whether such countdowns can be used in realistic applications or not (Experiment 3). As a result, we found that countdowns and count-ups that were "from 250 msec shorter to 10% longer" than 3, 5, or 10 sec were perceived as 3, 5, or 10 sec, respectively, and those "from 5 to 0" (their lengths were 5 sec) that first displayed extremely shorter intervals were perceived as being shorter than their actual length (5 sec). Finally, we confirmed the applicability and effectiveness of such displays in a realistic application. Thus, we strongly argue that these findings could become indispensable knowledge for researchers in this research field to reduce users' cognitive load during wait times.
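As a concrete illustration of the Experiment 2 manipulation, the sketch below generates per-digit intervals that start short and lengthen while keeping the total wait at exactly 5 s. The linear ramp and the specific values are our own assumption for illustration, not the calibrated intervals from the paper.

```python
def front_loaded_intervals(total_ms: float = 5000, n: int = 5,
                           first_ms: float = 600) -> list[float]:
    """Digit intervals that begin short and grow linearly so their sum
    still equals total_ms; the countdown feels faster at its start."""
    step = 2 * (total_ms - n * first_ms) / (n * (n - 1))
    return [first_ms + i * step for i in range(n)]

print(front_loaded_intervals())  # [600.0, 800.0, 1000.0, 1200.0, 1400.0]
```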
8
Selenite: Scaffolding Online Sensemaking with Comprehensive Overviews Elicited from Large Language Models
Michael Xieyang Liu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Tongshuang Wu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Tianying Chen (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Franklin Mingzhe Li (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Aniket Kittur (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Brad A. Myers (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Sensemaking in unfamiliar domains can be challenging, demanding considerable user effort to compare different options with respect to various criteria. Prior research and our formative study found that people would benefit from reading an overview of an information space upfront, including the criteria others previously found useful. However, existing sensemaking tools struggle with the "cold-start" problem -- not only requiring significant input from previous users to generate and share these overviews, but also that such overviews may turn out to be biased and incomplete. In this work, we introduce a novel system, Selenite, which leverages Large Language Models (LLMs) as reasoning machines and knowledge retrievers to automatically produce a comprehensive overview of options and criteria to jumpstart users' sensemaking processes. Subsequently, Selenite also adapts as people use it, helping users find, read, and navigate unfamiliar information in a systematic yet personalized manner. Through three studies, we found that Selenite produced accurate and high-quality overviews reliably, significantly accelerated users' information processing, and effectively improved their overall comprehension and sensemaking experience.
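The cold-start step reduces to prompting an LLM for options and criteria up front. A minimal sketch follows; the prompt wording and JSON schema are our guesses for illustration, not Selenite's actual prompts.

```python
import json

OVERVIEW_PROMPT = """You are helping someone new to a domain start making sense of it.
Topic: {topic}
List the main options people consider and the criteria used to compare them.
Respond with JSON of the form {{"options": [...], "criteria": [...]}}."""

def build_overview_prompt(topic: str) -> str:
    """Assemble a jumpstart-overview prompt in the spirit of Selenite."""
    return OVERVIEW_PROMPT.format(topic=topic)

def parse_overview(llm_response: str) -> dict:
    """Expects the JSON object requested above; raises if the model deviates."""
    return json.loads(llm_response)

print(build_overview_prompt("choosing a static site generator"))
```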
8
Emotion Embodied: Unveiling the Expressive Potential of Single-Hand Gestures
Yuhan Luo (City University of Hong Kong, Hong Kong, China), Junnan Yu (The Hong Kong Polytechnic University, Hong Kong, China), Minhui Liang (City University of Hong Kong, Hong Kong, China), Yichen Wan (Hong Kong Polytechnic University, Hong Kong, Hong Kong), Kening Zhu (City University of Hong Kong, Hong Kong, China), Shannon Sie Santosa (City University of Hong Kong, Kowloon Tong, Hong Kong)
Hand gestures are widely used in daily life for expressing emotions, yet gesture input is not part of existing emotion tracking systems. To seek a practical and effortless way of using gestures to inform emotions, we explore the relationships between gestural features and commonly experienced emotions by focusing on single-hand gestures that are easy to perform and capture. First, we collected 756 gestures (in photo and video pairs) from 63 participants who expressed different emotions in a survey, and then interviewed 11 of them to understand their gesture-forming rationales. We found that the valence and arousal level of the expressed emotions significantly correlated with participants' finger-pointing direction and their gesture strength, and synthesized four channels through which participants externalized their expressions with gestures. Reflecting on the findings, we discuss how emotions can be characterized and contextualized with gestural cues and implications for designing multimodal emotion tracking systems and beyond.
8
Visual Noise Cancellation: Exploring Visual Discomfort and Opportunities for Vision Augmentations
Junlei Hong (University of Otago, Dunedin, New Zealand), Tobias Langlotz (University of Otago, Dunedin, New Zealand), Jonathan Sutton (University of Otago, Dunedin, New Zealand), Holger Regenbrecht (University of Otago, Dunedin, Otago, New Zealand)
Acoustic noise control or cancellation (ANC) is a commonplace component of modern audio headphones. ANC aims to actively mitigate disturbing environmental noise for a quieter and improved listening experience, and works by digitally controlling the frequency and amplitude characteristics of sound. Much less explored is visual noise and active visual noise control, which we address here. We first explore visual noise and scenarios in which visual noise arises based on findings from four workshops we conducted. We then introduce the concept of visual noise cancellation (VNC) and how it can be used to reduce identified effects of visual noise. In addition, we developed head-worn demonstration prototypes to practically explore the concept of active VNC with selected scenarios in a user study. Finally, we discuss the application of VNC, including vision augmentations that moderate the user's view of the environment to address perceptual needs and to provide augmented reality content.
8
MOSion: Gaze Guidance with Motion-triggered Visual Cues by Mosaic Patterns
Arisa Kohtani (Tokyo Institute of Technology, Tokyo, Japan), Shio Miyafuji (Tokyo Institute of Technology, Tokyo, Japan), Keishiro Uragaki (Aoyama Gakuin University, Tokyo, Japan), Hidetaka Katsuyama (Tokyo Institute of Technology, Tokyo, Japan), Hideki Koike (Tokyo Institute of Technology, Tokyo, Japan)
We propose a gaze-guiding method called MOSion that adjusts its guiding strength in response to observers' motion, based on a high-speed projector and the afterimage effect in the human visual system. Our method decomposes the target area into mosaic patterns to embed visual cues in the perceived images. The patterns direct attention to the target area only for moving observers; a stationary observer sees the original image with little distortion because of light integration in visual perception. Precomputing the patterns provides the adaptive guiding effect without tracking devices or motion-dependent computational costs. An evaluation and a user study show that the mosaic decomposition enhances perceived saliency with few visual artifacts, especially in moving conditions. Our method, embedded in white light, works in various situations such as planar posters, advertisements, and curved objects.
8
Synlogue with Aizuchi-bot: Investigating the Co-Adaptive and Open-Ended Interaction Paradigm
Kazumi Yoshimura (Waseda University, Shinjuku-ku, Tokyo, Japan), Dominique Chen (Waseda University, Shinjuku-ku, Tokyo, Japan), Olaf Witkowski (Crosslabs, Kyoto, Japan)
In contrast to dialogue, wherein the exchange of completed messages occurs through turn-taking, synlogue is a mode of conversation characterized by co-creative processes, such as mutually complementing incomplete utterances and cooperative overlaps of backchannelings. Such co-creative conversations have the potential to alleviate social divisions in contemporary information environments. This study proposed the design concept of a synlogue based on literature in linguistics and anthropology and explored features that facilitate synlogic interactions in computer-mediated interfaces. Through an experiment, we focused on aizuchi, an important backchanneling element that drives synlogic conversation, and compared the speech and perceptual changes of participants when a bot dynamically uttered aizuchi or remained silent in a situation simulating an online video call. Consequently, we discussed the implications for interaction design based on our qualitative and quantitative analysis of the experiment. The synlogic perspective presented in this study is expected to help HCI researchers achieve more convivial forms of communication.
8
Constrained Highlighting in a Document Reader can Improve Reading Comprehension
Nikhita Joshi (University of Waterloo, Waterloo, Ontario, Canada), Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
Highlighting text in a document is a common active reading strategy to remember information from documents. Learning theory suggests that for highlights to be effective, readers must be selective with what they choose to highlight. We investigate if an imposed user interface constraint limiting the number of highlighted words in a document reader can improve reading comprehension. A large-scale between-subjects experiment shows that constraining the number of words that can be highlighted leads to higher reading comprehension scores than highlighting nothing or highlighting an unlimited number of words. Our work empirically validates theories in psychology, which in turn enables several new research directions within HCI.
7
MAF: Exploring Mobile Acoustic Field for Hand-to-Face Gesture Interactions
Yongjie Yang (University of Pittsburgh, Pittsburgh, Pennsylvania, United States), Tao Chen (University of Pittsburgh, Pittsburgh, Pennsylvania, United States), Yujing Huang (University of Pittsburgh, Pittsburgh, Pennsylvania, United States), Xiuzhen Guo (Zhejiang University, Hangzhou, China), Longfei Shangguan (University of Pittsburgh, Pittsburgh, Pennsylvania, United States)
We present MAF, a novel acoustic sensing approach that leverages the commodity hardware in bone conduction earphones for hand-to-face gesture interactions. Briefly, by shining audio signals with bone conduction earphones, we observe that these signals not only propagate along the surface of the human face but also dissipate into the air, creating an acoustic field that envelops the individual’s head. We conduct benchmark studies to understand how various hand-to-face gestures and human factors influence this acoustic field. Building on the insights gained from these initial studies, we then propose a deep neural network combined with signal preprocessing techniques. This combination empowers MAF to effectively detect, segment, and subsequently recognize a variety of hand-to-face gestures, whether in close contact with the face or above it. Our comprehensive evaluation based on 22 participants demonstrates that MAF achieves an average gesture recognition accuracy of 92% across ten different gestures tailored to users' preferences.
7
Quantifying Wrist-Aiming Habits with A Dual-Sensor Mouse: Implications for Player Performance and Workload
Donghyeon Kang (Yonsei University, Seoul, Korea, Republic of), Namsub Kim (Yonsei University, Seoul, Korea, Republic of), Daekaun Kang (Yonsei University, Seoul, Korea, Republic of), June-Seop Yoon (Yonsei University, Seoul, Korea, Republic of), Sunjun Kim (Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, Korea, Republic of), Byungjoo Lee (Department of Computer Science, Yonsei University, Seoul, Korea, Republic of)
Computer mice are widely used today as the primary input device in competitive video games. If a player exhibits more wrist rotation than other players when moving the mouse laterally, the player is said to have stronger wrist-aiming habits. Despite strong public interest, there has been no affordable technique to quantify the extent of a player's wrist-aiming habits and no scientific investigation into how the habits affect player performance and workload. We present a reliable and affordable technique to quantify the extent of a player's wrist-aiming habits using a mouse equipped with two optical sensors (i.e., a dual-sensor mouse). In two user studies, we demonstrate the reliability of the technique and examine the relationship between wrist-aiming habits and player performance or workload. In summary, player expertise and mouse sensitivity significantly impacted wrist-aiming habits; the extent of wrist-aiming showed a positive correlation with upper limb workload.
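The geometry behind a dual-sensor mouse is straightforward: translation moves both sensors identically, while rotation produces a differential displacement proportional to the angle. The sketch below is our reconstruction of that principle, not the authors' code; the sensor layout and units are assumptions.

```python
import math

def rotation_deg(d1: tuple[float, float], d2: tuple[float, float],
                 baseline_mm: float) -> float:
    """Per-frame rotation estimate from two optical-sensor displacement
    vectors (dx, dy) in mm, with sensors baseline_mm apart on the x-axis.
    Translation cancels in the difference; rotation remains."""
    dy_diff = d2[1] - d1[1]
    return math.degrees(math.atan2(dy_diff, baseline_mm))

# 1 mm of differential y-motion across a 60 mm baseline ≈ 0.95° of rotation.
print(rotation_deg((5.0, 0.0), (5.0, 1.0), 60.0))
```

Accumulating this estimate over frames would give the share of a lateral movement attributable to wrist rotation, which is the quantity the authors relate to performance and workload.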
7
AudioXtend: Assisted Reality Visual Accompaniments for Audiobook Storytelling During Everyday Routine Tasks
Felicia Fang-Yi Tan (National University of Singapore, Singapore, Singapore), Peisen Xu (National University of Singapore, Singapore, Singapore), Ashwin Ram (National University of Singapore, Singapore, Singapore), Wei Zhen Suen (National University of Singapore, Singapore, Singapore), Shengdong Zhao (National University of Singapore, Singapore, Singapore), Yun Huang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States), Christophe Hurter (Université de Toulouse, Toulouse, France)
The rise of multitasking in contemporary lifestyles has positioned audio-first content as an essential medium for information consumption. We present AudioXtend, an approach to augment audiobook experiences during daily tasks by integrating glanceable, AI-generated visuals through optical see-through head-mounted displays (OHMDs). Our initial study showed that these visual augmentations not only preserved users' primary task efficiency but also dramatically enhanced immediate auditory content recall by 33.3% and 7-day recall by 32.7%, alongside a marked improvement in narrative engagement. Through participatory design workshops involving digital arts designers, we crafted a set of design principles for visual augmentations that are attuned to the requirements of multitaskers. Finally, a 3-day take-home field study further revealed new insights for everyday use, underscoring the potential of assisted reality (aR) to enhance heads-up listening and incidental learning experiences.
7
Apple’s Knowledge Navigator: Why Doesn’t that Conversational Agent Exist Yet?
Amanda K. Newendorp (Iowa State University, Ames, Iowa, United States), Mohammadamin Sanaei (Iowa State University, Ames, Iowa, United States), Arthur J. Perron (Iowa State University, Ames, Iowa, United States), Hila Sabouni (Iowa State University, Ames, Iowa, United States), Nikoo Javadpour (Iowa State University, Ames, Iowa, United States), Maddie Sells (Iowa State University, Ames, Iowa, United States), Katherine Nelson (Iowa State University, Ames, Iowa, United States), Michael Dorneich (Iowa State University, Ames, Iowa, United States), Stephen B. Gilbert (Iowa State University, Ames, Iowa, United States)
Apple’s 1987 Knowledge Navigator video contains a vision of a sophisticated digital personal assistant, but the natural human-agent conversational dialog shown does not currently exist. To investigate why, the authors analyzed the video using three theoretical frameworks: the DiCoT framework, the HAT Game Analysis framework, and the Flows of Power framework. These were used to codify the human-agent interactions and classify the agent’s capabilities. While some barriers to creating such agents are technological, other barriers arise from privacy, social and situational factors, trust, and the financial business case. The social roles and asymmetric interactions of the human and agent are discussed in the broader context of HAT research, along with the need for a new term for these agents that does not rely on a human social relationship metaphor. This research offers designers of conversational agents a research roadmap to build more highly capable and trusted non-human teammates.
7
MARingBA: Music-Adaptive Ringtones for Blended Audio Notification Delivery
Alexander Wang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Yi Fei Cheng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), David Lindlbauer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Audio notifications provide users with an efficient way to access information beyond their current focus of attention. Current notification delivery methods, like phone ringtones, are primarily optimized for high noticeability, enhancing situational awareness in some scenarios but causing disruption and annoyance in others. In this work, we build on the observation that music listening is now a commonplace practice and present MARingBA, a novel approach that blends ringtones into background music to modulate their noticeability. We contribute a design space exploration of music-adaptive manipulation parameters, including beat matching, key matching, and timbre modifications, to tailor ringtones to different songs. Through two studies, we demonstrate that MARingBA supports content creators in authoring audio notifications that fit low, medium, and high levels of urgency and noticeability. Additionally, end users prefer music-adaptive audio notifications over conventional delivery methods, such as volume fading.
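At its core, beat matching reduces to a tempo ratio plus an onset delay that lands the ringtone on the song's next beat. The sketch below illustrates that arithmetic under assumed inputs; it is not MARingBA's implementation, which additionally covers key matching and timbre modification.

```python
def beat_match(song_bpm: float, ringtone_bpm: float,
               now_s: float, last_beat_s: float) -> tuple[float, float]:
    """Return (playback-rate stretch, seconds to wait) so a ringtone
    starts on the host song's next beat at the song's tempo."""
    stretch = song_bpm / ringtone_bpm
    beat_period = 60.0 / song_bpm
    elapsed = (now_s - last_beat_s) % beat_period
    return stretch, beat_period - elapsed

stretch, delay = beat_match(song_bpm=120, ringtone_bpm=100,
                            now_s=10.3, last_beat_s=10.0)
print(stretch, round(delay, 3))  # 1.2 0.2
```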
7
Sweating the Details: Emotion Recognition and the Influence of Physical Exertion in Virtual Reality Exergaming
Dominic Potts (University of Bath, Bath, United Kingdom), Zoe Broad (University of Bath, Bath, United Kingdom), Tarini Sehgal (University of Bath, Bath, United Kingdom), Joseph Hartley (University of Bath, Bath, United Kingdom), Eamonn O'Neill (University of Bath, Bath, United Kingdom), Crescent Jicol (University of Bath, Bath, United Kingdom), Christopher Clarke (University of Bath, Bath, United Kingdom), Christof Lutteroth (University of Bath, Bath, United Kingdom)
There is great potential for adapting Virtual Reality (VR) exergames based on a user's affective state. However, physical activity and VR interfere with physiological sensors, making affect recognition challenging. We conducted a study (n=72) in which users experienced four emotion inducing VR exergaming environments (happiness, sadness, stress and calmness) at three different levels of exertion (low, medium, high). We collected physiological measures through pupillometry, electrodermal activity, heart rate, and facial tracking, as well as subjective affect ratings. Our validated virtual environments, data, and analyses are openly available. We found that the level of exertion influences the way affect can be recognised, as well as affect itself. Furthermore, our results highlight the importance of data cleaning to account for environmental and interpersonal factors interfering with physiological measures. The results shed light on the relationships between physiological measures and affective states and inform design choices about sensors and data cleaning approaches for affective VR.
7
Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Jakob Schoeffer (University of Texas at Austin, Austin, Texas, United States), Maria De-Arteaga (The University of Texas at Austin, Austin, Texas, United States), Niklas Kühl (University of Bayreuth, Bayreuth, Germany)
In this work, we study the effects of feature-based explanations on distributive fairness of AI-assisted decisions, specifically focusing on the task of predicting occupations from short textual bios. We also investigate how any effects are mediated by humans' fairness perceptions and their reliance on AI recommendations. Our findings show that explanations influence fairness perceptions, which, in turn, relate to humans' tendency to adhere to AI recommendations. However, we see that such explanations do not enable humans to discern correct and incorrect AI recommendations. Instead, we show that they may affect reliance irrespective of the correctness of AI recommendations. Depending on which features an explanation highlights, this can foster or hinder distributive fairness: when explanations highlight features that are task-irrelevant and evidently associated with the sensitive attribute, this prompts overrides that counter AI recommendations that align with gender stereotypes. Meanwhile, if explanations appear task-relevant, this induces reliance behavior that reinforces stereotype-aligned errors. These results imply that feature-based explanations are not a reliable mechanism to improve distributive fairness.
7
Using Low-frequency Sound to Create Non-contact Sensations On and In the Body
Waseem Hassan (University of Copenhagen, Copenhagen, Denmark), Asier Marzo (Universidad Publica de Navarra, Pamplona, Navarre, Spain), Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)
This paper proposes a method for generating non-contact sensations using low-frequency sound waves without requiring user instrumentation. This method leverages the fundamental acoustic response of a confined space to produce predictable pressure spatial distributions at low frequencies, called modes. These modes can be used to produce sensations either throughout the body, in localized areas of the body, or within the body. We first validate the location and strength of the modes simulated by acoustic modeling. Next, a perceptual study is conducted to show how different frequencies produce qualitatively different sensations across and within the participants' bodies. The low-frequency sound offers a new way of delivering non-contact sensations throughout the body. The results indicate a high accuracy for predicting sensations at specific body locations.
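The modes the method relies on are the standing waves of a confined space. For a rectangular room, textbook acoustics (not the paper itself) gives their frequencies as

```latex
f_{n_x n_y n_z} = \frac{c}{2}\,\sqrt{\left(\frac{n_x}{L_x}\right)^{2}
  + \left(\frac{n_y}{L_y}\right)^{2} + \left(\frac{n_z}{L_z}\right)^{2}},
```

where $c \approx 343$ m/s is the speed of sound, $L_x, L_y, L_z$ are the room dimensions, and $n_x, n_y, n_z$ are non-negative integer mode indices. A room 5 m long, for example, has its first axial mode near $343/(2 \times 5) \approx 34$ Hz, squarely in the low-frequency range the paper exploits, and the pressure maxima and nulls of each mode are what make sensations predictable at specific body locations.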
7
Outplay Your Weaker Self: A Mixed-Methods Study on Gamification to Overcome Procrastination in Academia
Jeanine Kirchner-Krath (Friedrich-Alexander-Universität Erlangen-Nuremberg, Nuremberg, Germany), Manuel Schmidt-Kraepelin (Institute of Applied Informatics and Formal Description Methods, Karlsruhe, Germany), Sofia Schöbel (Information Systems, Osnabrück, Germany), Mathias Ullrich (University of Koblenz, Koblenz, Germany), Ali Sunyaev (Karlsruhe Institute of Technology, Karlsruhe, Germany), Harald F. O. von Korflesch (University of Koblenz, Koblenz, Germany)
Procrastination is the deliberate postponing of tasks knowing that it will have negative consequences in the future. Despite the potentially serious impact on mental and physical health, research has just started to explore the potential of information systems to help students combat procrastination. Specifically, while existing learning systems increasingly employ elements of game design to transform learning into an enjoyable and purposeful adventure, little is known about the effects of gameful approaches to overcome procrastination in academic settings. This study advances knowledge on gamification to counter procrastination by conducting a mixed-methods study among higher education students. Our results shed light on usage patterns and outcomes of gamification on self-efficacy, self-control, and procrastination behaviors. The findings contribute to theory by providing a better understanding of the potential of gamification to tackle procrastination. Practitioners are supported by implications on how to design gamified learning systems to support learners in self-organized work.
7
SolderlessPCB: Reusing Electronic Components in PCB Prototyping through Detachable 3D Printed Housings
Zeyu Yan (University of Maryland, College Park, Maryland, United States), Jiasheng Li (University of Maryland, College Park, Maryland, United States), Zining Zhang (University of Maryland - College Park, College Park, Maryland, United States), Huaishu Peng (University of Maryland, College Park, Maryland, United States)
The iterative prototyping process for printed circuit boards (PCBs) frequently employs surface-mounted device (SMD) components, which are often discarded rather than reused due to the challenges associated with desoldering, leading to unnecessary electronic waste. This paper introduces SolderlessPCB, a collection of techniques for solder-free PCB prototyping, specifically designed to promote the recycling and reuse of electronic components. Central to this approach are custom 3D-printable housings that allow SMD components to be mounted onto PCBs without soldering. We detail the design of SolderlessPCB and the experiments conducted to evaluate its design parameters, electrical performance, and durability. To illustrate the potential for reusing SMD components with SolderlessPCB, we discuss two scenarios: the reuse of components from earlier design iterations and from obsolete prototypes. We also provide examples demonstrating that SolderlessPCB can handle high-current applications and is suitable for high-speed data transmission. The paper concludes by discussing the limitations of our approach and suggesting future directions to overcome these challenges.
6
PonDeFlick: A Japanese Text Entry on Smartwatch Commonalizing Flick Operation with Smartphone Interface
Kai Akamine (Doshisha University, Kyotanabe, Kyoto, Japan), Ryotaro Tsuchida (Doshisha University, Kyotanabe, Kyoto, Japan), Tsuneo Kato (Doshisha University, Kyotanabe, Japan), Akihiro Tamura (Doshisha University, Kyotanabe, Japan)
While the QWERTY keyboard is a standard text entry for Latin script languages on smart devices, the same is not always true for non-Latin script languages. In Japanese, the most popular text entry on smartphones is a flick-based interface that systematically assigns more than fifty kana characters to twelve keys of a numeric keypad in combination with flick directions. Under these circumstances, studies on Japanese text entry on smartwatches have focused on an efficient interface design that takes advantage of the regularity of the kana consonant and vowel structure, but overlooked commonality with familiar interfaces. Thus, we propose PonDeFlick, a Japanese text entry that commonalizes the flick directions with the familiar smartphone interface while providing the entire touchscreen for gestural operation. A ten-day user study showed that PonDeFlick reached a text-entry speed of 57.7 characters per minute, significantly faster than the numeric-keypad-based interface and a modification of PonDeFlick without the commonality.
6
VeeR: Exploring the Feasibility of Deliberately Designing VR Motion that Diverges from Mundane, Everyday Physical Motion to Create More Entertaining VR Experiences
Pin Chun Lu (National Taiwan University, Taipei, Taiwan), Che Wei Wang (National Taiwan University, Taipei, Taiwan), Yu Lun Hsu (National Taiwan University, Taipei, Taiwan), Alvaro Lopez (National Taiwan University, Taipei, Taiwan), Ching-Yi Tsai (National Taiwan University, Taipei, Taiwan), Chiao-Ju Chang (National Taiwan University, Taipei, Taiwan), Wei Tian Mireille Tan (University of Illinois Urbana-Champaign, Champaign, Illinois, United States), Li-Chun Lu (National Taiwan University, Taipei, Taiwan), Mike Y. Chen (National Taiwan University, Taipei, Taiwan)
This paper explores the feasibility of deliberately designing VR motion that diverges from users’ physical movements to turn mundane, everyday transportation motion (e.g., metros, trains, and cars) into more entertaining VR motion experiences, in contrast to prior car-based VR approaches that synchronize VR motion to physical car movement exactly. To gain insight into users’ preferences for veering rate and veering direction for turning (left/right) and pitching (up/down) during the three phases of acceleration (accelerating, cruising, and decelerating), we conducted a formative, perceptual study (n=24) followed by a VR experience evaluation (n=18), all conducted on metro trains moving in a mundane, straight-line motion. Results showed that participants preferred relatively high veering rates, and preferred pitching upward during acceleration and downward during deceleration. Furthermore, while veering decreased comfort as expected, it significantly enhanced immersion (p<.01) and entertainment (p<.001) and the overall experience, with comfort being considered, was preferred by 89% of participants.
6
MoodCapture: Depression Detection using In-the-Wild Smartphone Images
Subigya Kumar Nepal (Dartmouth College, Hanover, New Hampshire, United States), Arvind Pillai (Dartmouth College, Hanover, New Hampshire, United States), Weichen Wang (Dartmouth College, Hanover, New Hampshire, United States), Tess Griffin (Dartmouth College, Hanover, New Hampshire, United States), Amanda C. Collins (Dartmouth College, Hanover, New Hampshire, United States), Michael Heinz (Dartmouth College, Hanover, New Hampshire, United States), Damien Lekkas (Dartmouth College Geisel School of Medicine, Lebanon, New Hampshire, United States), Shayan Mirjafari (Dartmouth College, Hanover, New Hampshire, United States), Matthew Nemesure (Dartmouth College, Hanover, New Hampshire, United States), George Price (Dartmouth College, Hanover, New Hampshire, United States), Nicholas Jacobson (Dartmouth College, Hanover, New Hampshire, United States), Andrew Campbell (Dartmouth College, Hanover, New Hampshire, United States)
MoodCapture presents a novel approach that assesses depression based on images automatically captured from the front-facing camera of smartphones as people go about their daily lives. We collect over 125,000 photos in the wild from N=177 participants diagnosed with major depressive disorder for 90 days. Images are captured naturalistically while participants respond to the PHQ-8 depression survey question: "I have felt down, depressed, or hopeless''. Our analysis explores important image attributes, such as angle, dominant colors, location, objects, and lighting. We show that a random forest trained with face landmarks can classify samples as depressed or non-depressed and predict raw PHQ-8 scores effectively. Our post-hoc analysis provides several insights through an ablation study, feature importance analysis, and bias assessment. Importantly, we evaluate user concerns about using MoodCapture to detect depression based on sharing photos, providing critical insights into privacy concerns that inform the future design of in-the-wild image-based mental health assessment tools.
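The abstract names the model family outright: a random forest over face-landmark features. A minimal sketch with synthetic data follows; the flattened-coordinate feature layout and the 68-landmark convention are our assumptions, not details from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Rows: flattened (x, y) coordinates of 68 face landmarks per photo.
# Labels: 1 = depressed, 0 = not, derived from PHQ-8 in the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 68 * 2))      # synthetic stand-in features
y = rng.integers(0, 2, size=200)        # synthetic stand-in labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:3]))               # per-photo depressed/non-depressed calls
```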
6
PANDALens: Towards AI-Assisted In-Context Writing on OHMD During Travels
Runze Cai (National University of Singapore, Singapore, Singapore), Nuwan Janaka (National University of Singapore, Singapore, Singapore), Yang Chen (National University of Singapore, Singapore, Singapore), Lucia Wang (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States), Shengdong Zhao (National University of Singapore, Singapore, Singapore), Can Liu (City University of Hong Kong, Hong Kong, China)
While effective for recording and sharing experiences, traditional in-context writing tools are relatively passive and unintelligent, serving more like instruments rather than companions. This reduces primary task (e.g., travel) enjoyment and hinders high-quality writing. Through formative study and iterative development, we introduce PANDALens, a Proactive AI Narrative Documentation Assistant built on an Optical See-Through Head Mounted Display that supports personalized documentation in everyday activities. PANDALens observes multimodal contextual information from user behaviors and environment to confirm interests and elicit contemplation, and employs Large Language Models to transform such multimodal information into coherent narratives with significantly reduced user effort. A real-world travel scenario comparing PANDALens with a smartphone alternative confirmed its effectiveness in improving writing quality and travel enjoyment while minimizing user effort. Accordingly, we propose design guidelines for AI-assisted in-context writing, highlighting the potential of transforming them from tools to intelligent companions.
6
The Sound of Support: Gendered Voice Agent as Support to Minority Teammates in Gender-Imbalanced Team
Angel Hsing-Chi Hwang (Cornell University, Ithaca, New York, United States), Andrea Stevenson Won (Cornell University, Ithaca, New York, United States)
The present work explores the potential of leveraging a teamwork agent's identity -- signaled through its gendered voice -- to support marginalized individuals in gender-imbalanced teams. In a mixed design experiment (N = 178), participants were randomly assigned to work with a female and a male voice agent in either a female-dominated or male-dominated team. Results show the presence of a same-gender voice agent is particularly beneficial to the performance of marginalized female members, such that they would contribute more ideas and talk more when a female agent was present. Conversely, marginalized male members became more talkative but were less focused on the teamwork tasks at hand when working with a male-sounding agent. The findings of the present experiment support existing literature on the effect of social presence in gender-imbalanced teams, such that gendered agents serve similar benefits as human teammates of the same gender identities. However, the effect of agents' presence remains limited when participants have experienced severe marginalization in the past. Based on findings from the present study, we discuss relevant design implications and avenues for future research.
6
DirectGPT: A Direct Manipulation Interface to Interact with Large Language Models
Damien Masson (University of Waterloo, Waterloo, Ontario, Canada), Sylvain Malacria (Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, Lille, France), Géry Casiez (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France), Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
We characterize and demonstrate how the principles of direct manipulation can improve interaction with large language models. This includes: continuous representation of generated objects of interest; reuse of prompt syntax in a toolbar of commands; manipulable outputs to compose or control the effect of prompts; and undo mechanisms. This idea is exemplified in DirectGPT, a user interface layer on top of ChatGPT that works by transforming direct manipulation actions to engineered prompts. A study shows participants were 50% faster and relied on 50% fewer and 72% shorter prompts to edit text, code, and vector images compared to baseline ChatGPT. Our work contributes a validated approach to integrate LLMs into traditional software using direct manipulation. Data, code, and demo available at https://osf.io/3wt6s.
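The described strategy, turning a direct-manipulation action into an engineered prompt, can be illustrated in a few lines. The template below is invented for illustration; DirectGPT's actual prompts are not given in the abstract.

```python
def action_to_prompt(selection: str, command: str, document: str) -> str:
    """Convert a drag-select plus toolbar command into an LLM prompt."""
    return (
        f'Apply the operation "{command}" only to this selected span: '
        f'"{selection}".\nReturn the full updated document, changing '
        f"nothing else.\n---\n{document}"
    )

doc = "The quick brown fox jumps over the lazy dog."
print(action_to_prompt("quick brown", "translate to French", doc))
```

Keeping the user's selection and command in the prompt is what lets a click-and-drag gesture stand in for prompt engineering, which matches the efficiency gains the study reports.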
6
Towards Designing a Question-Answering Chatbot for Online News: Understanding Questions and Perspectives
Md Naimul Hoque (University of Maryland, College Park, Maryland, United States), Ayman A. Mahfuz (The University of Texas at Austin, Austin, Texas, United States), Mayukha Sridhatri Kindi (University of Maryland, College Park, Maryland, United States), Naeemul Hassan (University of Maryland, College Park, Maryland, United States)
Large Language Models (LLMs) have created opportunities for designing chatbots that can support complex question-answering (QA) scenarios and improve news audience engagement. However, we still lack an understanding of what roles journalists and readers deem fit for such a chatbot in newsrooms. To address this gap, we first interviewed six journalists to understand how they answer questions from readers currently and how they want to use a QA chatbot for this purpose. To understand how readers want to interact with a QA chatbot, we then conducted an online experiment (N=124) where we asked each participant to read three news articles and ask questions to either the author(s) of the articles or a chatbot. By combining results from the studies, we present alignments and discrepancies between how journalists and readers want to use QA chatbots and propose a framework for designing effective QA chatbots in newsrooms.
6
GazePointAR: A Context-Aware Multimodal Voice Assistant for Pronoun Disambiguation in Wearable Augmented Reality
Jaewook Lee (University of Washington, Seattle, Washington, United States), Jun Wang (University of Washington, Seattle, Washington, United States), Elizabeth Brown (University of Washington, Seattle, Washington, United States), Liam Chu (University of Washington, Seattle, Washington, United States), Sebastian S. Rodriguez (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States), Jon E. Froehlich (University of Washington, Seattle, Washington, United States)
Voice assistants (VAs) like Siri and Alexa are transforming human-computer interaction; however, they lack awareness of users' spatiotemporal context, resulting in limited performance and unnatural dialogue. We introduce GazePointAR, a fully-functional context-aware VA for wearable augmented reality that leverages eye gaze, pointing gestures, and conversation history to disambiguate speech queries. With GazePointAR, users can ask "what's over there?" or "how do I solve this math problem?" simply by looking and/or pointing. We evaluated GazePointAR in a three-part lab study (N=12): (1) comparing GazePointAR to two commercial systems; (2) examining GazePointAR's pronoun disambiguation across three tasks; and (3) an open-ended phase where participants could suggest and try their own context-sensitive queries. Participants appreciated the naturalness and human-like nature of pronoun-driven queries, although sometimes pronoun use was counter-intuitive. We then iterated on GazePointAR and conducted a first-person diary study examining how GazePointAR performs in-the-wild. We conclude by enumerating limitations and design considerations for future context-aware VAs.
6
Blended Whiteboard: Physicality and Reconfigurability in Remote Mixed Reality Collaboration
Jens Emil Sloth Grønbæk (Aarhus University, Aarhus, Denmark), Juan Sánchez Esquivel (Aarhus University, Aarhus, Denmark), Germán Leiva (Aarhus University, Aarhus, Denmark), Eduardo Velloso (University of Melbourne, Melbourne, Victoria, Australia), Hans Gellersen (Lancaster University, Lancaster, United Kingdom), Ken Pfeuffer (Aarhus University, Aarhus, Denmark)
The whiteboard is essential for collaborative work. To preserve its physicality in remote collaboration, Mixed Reality (MR) can blend real whiteboards across distributed spaces. Going beyond reality, MR can further enable interactions like panning and zooming in a virtually reconfigurable infinite whiteboard. However, this reconfigurability conflicts with the sense of physicality. To address this tension, we introduce Blended Whiteboard, a remote collaborative MR system enabling reconfigurable surface blending across distributed physical whiteboards. Blended Whiteboard supports a unique collaboration style, where users can sketch on their local whiteboards but also reconfigure the blended space to facilitate transitions between loosely and tightly coupled work. We describe design principles inspired by proxemics: supporting users in changing between facing each other and being side-by-side, and switching between navigating the whiteboard synchronously and independently. Our work shows exciting benefits and challenges of combining physicality and reconfigurability in the design of distributed MR whiteboards.
6
Look Once to Hear: Target Speech Hearing with Noisy Examples
Bandhav Veluri (University of Washington, Seattle, Washington, United States), Malek Itani (University of Washington, Seattle, Washington, United States), Tuochao Chen (Computer Science and Engineering, Seattle, Washington, United States), Takuya Yoshioka (IEEE, Redmond, Washington, United States), Shyamnath Gollakota (University of Washington, Seattle, Washington, United States)
In crowded settings, the human brain can focus on speech from a target speaker, given prior knowledge of how they sound. We introduce a novel intelligent hearable system that achieves this capability, enabling target speech hearing: the wearer hears the target speaker while all interfering speech and noise are suppressed. A naive approach is to require a clean speech example to enroll the target speaker. However, this is not well aligned with the hearable application domain, since obtaining a clean example is challenging in real-world scenarios, creating a unique user interface problem. We present the first enrollment interface where the wearer looks at the target speaker for a few seconds to capture a single, short, highly noisy, binaural example of the target speaker. This noisy example is used for enrollment and subsequent speech extraction in the presence of interfering speakers and noise. Our system achieves a signal quality improvement of 7.01 dB using less than 5 seconds of noisy enrollment audio and can process 8 ms audio chunks in 6.24 ms on an embedded CPU. Our user studies demonstrate generalization to real-world static and mobile speakers in previously unseen indoor and outdoor multipath environments. Finally, our enrollment interface for noisy examples does not cause performance degradation compared to clean examples, while being convenient and user-friendly. Taking a step back, this paper is an important step towards enhancing human auditory perception with artificial intelligence.
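The enroll-then-extract pipeline can be sketched as follows. This is an assumed structure only, with a trivial spectral average standing in for the paper's learned speaker encoder and separation network:

```python
# Illustrative sketch of the enroll-then-extract loop described above
# (assumed structure, not the authors' model; the "embedding" here is a
# crude spectral average standing in for a learned speaker encoder).
import numpy as np

SAMPLE_RATE = 16_000
CHUNK = int(0.008 * SAMPLE_RATE)  # 8 ms chunks, as in the paper

def enroll(noisy_binaural: np.ndarray) -> np.ndarray:
    """Derive a speaker embedding from a short, noisy enrollment clip."""
    spectrum = np.abs(np.fft.rfft(noisy_binaural.mean(axis=0)))
    return spectrum / (np.linalg.norm(spectrum) + 1e-8)

def extract(chunk: np.ndarray, embedding: np.ndarray) -> np.ndarray:
    """Placeholder for the conditioned separation network: here we just
    scale the chunk by its similarity to the enrolled spectrum."""
    spec = np.abs(np.fft.rfft(chunk, n=2 * (embedding.size - 1)))
    gain = float(spec @ embedding / (np.linalg.norm(spec) + 1e-8))
    return chunk * gain

# Streaming loop: 5 s of noisy binaural enrollment, then chunk-wise extraction.
rng = np.random.default_rng(0)
emb = enroll(rng.standard_normal((2, 5 * SAMPLE_RATE)))
stream = rng.standard_normal(SAMPLE_RATE)  # 1 s of mixture audio
out = np.concatenate([extract(stream[i:i + CHUNK], emb)
                      for i in range(0, len(stream) - CHUNK + 1, CHUNK)])
```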
6
Augmented Reality at Zoo Exhibits: A Design Framework for Enhancing the Zoo Experience
Brandon Victor Syiem (Queensland University of Technology, Brisbane, Queensland, Australia)Sarah Webber (University of Melbourne, Melbourne, Victoria, Australia)Ryan M. Kelly (University of Melbourne, Melbourne, VIC, Australia)Qiushi Zhou (University of Melbourne, Melbourne, Victoria, Australia)Jorge Goncalves (University of Melbourne, Melbourne, Australia)Eduardo Velloso (University of Melbourne, Melbourne, Victoria, Australia)
Augmented Reality (AR) offers unique opportunities for contributing to zoos' objectives of public engagement and education about animal and conservation issues. However, the diversity of animal exhibits poses design challenges for AR applications that are not encountered in more controlled environments, such as museums. To support the design of AR applications that meaningfully engage the public with zoo objectives, we first conducted two scoping reviews to interrogate previous work on AR and broader technology use at zoos. We then conducted a workshop with zoo representatives to understand the challenges and opportunities in using AR to achieve zoo objectives. Additionally, we conducted a field trip to a public zoo to identify exhibit characteristics that impact AR application design. We synthesise the findings from these studies into a framework that enables the design of diverse AR experiences. We illustrate the utility of the framework by presenting two concepts for feasible AR applications.
6
"Waves Push Me to Slumberland": Reducing Pre-Sleep Stress through Spatio-Temporal Tactile Displaying of Music.
Hui Zhang (Hunan University, Changsha, China)Ruixiao Zheng (Hunan University, Changsha, China)Shirao Yang (Hunan University, Changsha, China)Wanyi Wei (Hunan University, Changsha, China)Huafeng Shan (Keeson, Jiaxing, China)Jianwei Zhang (Keeson, Jiaxing, China)
Although spatio-temporal patterns of vibration, characterized as rhythmic compositions of tactile content, can elicit specific emotional responses and enhance the emotion conveyed by music, limited research has explored their underlying mechanism for regulating emotional states in the pre-sleep context. To investigate whether synergistic spatio-temporal tactile displaying of music can facilitate relaxation before sleep, we developed 16 vibration patterns and an audio-tactile prototype for presenting an ambient experience in a pre-sleep scenario. The stress-reducing effects were then evaluated and compared via a user experiment. The results showed that the spatio-temporal tactile display of music significantly reduced stress and positively influenced users' emotional states before sleep. Furthermore, our study highlights the therapeutic potential of incorporating quantitative and adjustable spatio-temporal parameters, correlated with subjective psychophysical perceptions, into the audio-tactile experience for stress management.
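For intuition, a mapping from music to a spatio-temporal vibration pattern might split each audio frame's spectrum across a row of actuators. The paper's 16 patterns and hardware are not reproduced here, so everything below is an illustrative assumption:

```python
# Hedged sketch of mapping music onto a spatio-temporal vibration pattern
# (illustrative; not the authors' patterns or hardware).
import numpy as np

N_ACTUATORS = 4          # assumed row of vibration motors, head to foot
FRAME = 1024             # samples per control frame
SAMPLE_RATE = 44_100

def music_to_pattern(audio: np.ndarray) -> np.ndarray:
    """Return an (n_frames, N_ACTUATORS) intensity matrix in [0, 1]:
    each actuator is driven by one band of the frame's spectrum."""
    n_frames = len(audio) // FRAME
    pattern = np.zeros((n_frames, N_ACTUATORS))
    for t in range(n_frames):
        spec = np.abs(np.fft.rfft(audio[t * FRAME:(t + 1) * FRAME]))
        bands = np.array_split(spec, N_ACTUATORS)  # low bands -> first motors
        energy = np.array([b.mean() for b in bands])
        pattern[t] = energy / (energy.max() + 1e-8)
    return pattern

# e.g. music_to_pattern(np.sin(2 * np.pi * 220 * np.arange(44100) / 44100))
```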
6
The Social Journal: Investigating Technology to Support and Reflect on Social Interactions
Sophia Sakel (LMU Munich, Munich, Germany)Tabea Blenk (LMU Munich, Munich, Germany)Albrecht Schmidt (LMU Munich, Munich, Germany)Luke Haliburton (LMU Munich, Munich, Germany)
Social interaction is a crucial part of what it means to be human. Maintaining a healthy social life is strongly tied to positive outcomes for both physical and mental health. While we use personal informatics data to reflect on many aspects of our lives, technology-supported reflection for social interactions is currently under-explored. To address this, we first conducted an online survey (N=124) to understand how users want to be supported in their social interactions. Based on this, we designed and developed an app for users to track and reflect on their social interactions and deployed it in the wild for two weeks (N=25). Our results show that users are interested in tracking meaningful in-person interactions that are currently untracked and that an app can effectively support self-reflection on social interaction frequency and social load. We contribute insights and concrete design recommendations for technology-supported reflection for social interaction.
6
Patient Acceptance of Self-Monitoring on a Smartwatch in a Routine Digital Therapy: A Mixed-Methods Study
Camille Nadal (Trinity College Dublin, Dublin, Ireland)Caroline Earley (Thread Research, Dublin, Ireland)Angel Enrique (Amwell Science, Dublin, Ireland)Corina Sas (Lancaster University, Lancaster, United Kingdom)Derek Richards (Amwell Science, Dublin, Ireland)Gavin Doherty (Trinity College Dublin, Dublin, Ireland)
Self-monitoring of mood and lifestyle habits is the cornerstone of many therapies, but it is still hindered by persistent issues including inaccurate records, gaps in the monitoring, patient burden, and perceived stigma. Smartwatches have the potential to deliver enhanced self-reports, but their acceptance in clinical mental health settings is unexplored and rendered difficult by a complex theoretical landscape and the need for a longitudinal perspective. We present the Mood Monitor smartwatch application for self-monitoring of mood and lifestyle habits. We investigated patient acceptance of the app within a routine 8-week digital therapy. We recruited 35 patients of the UK's National Health Service and evaluated their acceptance through three online questionnaires and a post-study interview. We assessed the clinical feasibility of the Mood Monitor by comparing clinical, usage, and acceptance metrics obtained from the 35 patients who used the smartwatch with those from an additional 34 patients without a smartwatch (digital treatment as usual). Findings showed that the smartwatch app was highly accepted by patients, revealed which factors facilitated and impeded this acceptance, and supported clinical feasibility. We provide guidelines for the design of smartwatch self-monitoring and reflect on the conduct of HCI research evaluating user acceptance of mental health technologies.
6
The Effects of Generative AI on Design Fixation and Divergent Thinking
Samangi Wadinambiarachchi (University of Melbourne, Melbourne, VIC, Australia)Ryan M. Kelly (University of Melbourne, Melbourne, VIC, Australia)Saumya Pareek (University of Melbourne, Melbourne, Victoria, Australia)Qiushi Zhou (University of Melbourne, Melbourne, Victoria, Australia)Eduardo Velloso (University of Melbourne, Melbourne, Victoria, Australia)
Generative AI systems have been heralded as tools for augmenting human creativity and inspiring divergent thinking, though with little empirical evidence for these claims. This paper explores the effects of exposure to AI-generated images on measures of design fixation and divergent thinking in a visual ideation task. Through a between-participants experiment (N=60), we found that support from an AI image generator during ideation leads to higher fixation on an initial example. Participants who used AI produced fewer ideas, with less variety and lower originality compared to a baseline. Our qualitative analysis suggests that the effectiveness of co-ideation with AI rests on participants' chosen approach to prompt creation and on the strategies used by participants to generate ideas in response to the AI's suggestions. We discuss opportunities for designing generative AI systems for ideation support and incorporating these AI tools into ideation workflows.
6
Exploring the Lived Experience of Behavior Change Technologies: Towards an Existential Model of Behavior Change for HCI
Amon Rapp (University of Turin, Torino, Italy)Arianna Boldi (University of Turin, Torino, Italy)
The majority of behavior change and persuasive technologies aim exclusively at modifying a specific behavior. However, the focus on behavior may cloud the “existential aspects” of the process of change. To explore the lived and meaning-laden experience of behavior change, we interviewed 23 individuals who have used behavior change technology in their everyday life. The study findings highlight that behavior change is tied to meanings that point to existential matters, relates to a nexus of life circumstances, and unfolds over long periods of time. By contrast, the technology used by the participants appears mostly to focus on the present target behavior, ignoring its links to the participants’ life “context” and “time,” and providing scarce help for sense-making. Based on these findings, we surface a preliminary “existential model of behavior change,” identify several barriers that may prevent the modification of behavior, and propose some design suggestions to overcome them.
6
CodeAid: Evaluating a Classroom Deployment of an LLM-based Programming Assistant that Balances Student and Educator Needs
Majeed Kazemitabaar (University of Toronto, Toronto, Ontario, Canada)Runlong Ye (University of Toronto, Toronto, Ontario, Canada)Xiaoning Wang (University of Toronto, Toronto, Ontario, Canada)Austin Henley (Microsoft, Redmond, Washington, United States)Paul Denny (The University of Auckland, Auckland, New Zealand)Michelle Craig (University of Toronto, Toronto, Ontario, Canada)Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
Timely, personalized feedback is essential for students learning programming. LLM-powered tools like ChatGPT offer instant support, but reveal direct answers with code, which may hinder deep conceptual engagement. We developed CodeAid, an LLM-powered programming assistant delivering helpful, technically correct responses without revealing code solutions. CodeAid answers conceptual questions, generates pseudo-code with line-by-line explanations, and annotates students' incorrect code with fix suggestions. We deployed CodeAid in a programming class of 700 students for a 12-week semester. We performed a thematic analysis of 8,000 usages of CodeAid, further enriched by weekly surveys and 22 student interviews. We then interviewed eight programming educators to gain further insights. Our findings reveal four design considerations for future educational AI assistants: D1) exploiting AI's unique benefits; D2) simplifying query formulation while promoting cognitive engagement; D3) avoiding direct responses while encouraging motivated learning; and D4) maintaining transparency and control for students to assess and steer AI responses.
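The "helpful without revealing solutions" behavior is, at its core, a prompting guardrail. A hypothetical sketch follows; the prompt wording and the `call_llm` stand-in are assumptions, not CodeAid's actual design:

```python
# Sketch of the kind of guardrailed prompting CodeAid's design implies
# (hypothetical prompt and client; `call_llm` is a stand-in, not a real API).

SYSTEM_PROMPT = """You are a teaching assistant for an intro programming
course. Never output runnable code in any language. You may:
1. answer conceptual questions,
2. give pseudo-code with a line-by-line explanation,
3. point out likely bugs in the student's code and suggest fixes in words."""

def call_llm(system: str, user: str) -> str:
    """Stand-in for a chat-completion call; wire up a real LLM client here."""
    return "(model response would appear here)"

def code_aid(student_question: str, student_code: str = "") -> str:
    user = student_question
    if student_code:
        user += "\n\nMy code:\n" + student_code
    return call_llm(SYSTEM_PROMPT, user)
```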
6
Me, My Health, and My Watch: How Children with ADHD Understand Smartwatch Health Data
Elizabeth Ankrah (University of California, Irvine, Irvine, California, United States)Franceli L. Cibrian (Chapman University, Orange, California, United States)Lucas M. Silva (University of California, Irvine, Irvine, California, United States)Arya Tavakoulnia (University of California Irvine, Irvine, California, United States)Jesus Armando Beltran (UCI, Irvine, California, United States)Sabrina Schuck (University of California Irvine, Irvine, California, United States)Kimberley D. Lakes (University of California Riverside, Riverside, California, United States)Gillian R. Hayes (University of California, Irvine, Irvine, California, United States)
Children with ADHD can experience a wide variety of challenges related to self-regulation, which can lead to poor educational, health, and wellness outcomes. Technological interventions, such as mobile and wearable health systems, can support data collection and reflection about health status. However, little is known about how children with ADHD interpret such data. We conducted a deployment study with 10 children, aged 10 to 15, for six weeks, during which they used a smartwatch in their homes. Results from observations and interviews during this study indicate that children with ADHD can interpret their own health data, particularly in the moment. However, as children with ADHD develop more autonomy, smartwatch systems may require alternatives for data reflection that are interpretable and actionable for them. This work contributes to the scholarly discourse around health data visualization, particularly in considering implications for the design of health technologies for children with ADHD.
6
Augmenting Perceived Length of Handheld Controllers: Effects of Object Handle Properties
Chaeyong Park (Pohang University of Science and Technology (POSTECH), Pohang, Gyungsangbuk, Korea, Republic of)Seungmoon Choi (Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, Korea, Republic of)
In the realm of virtual reality (VR), shape-changing controllers have emerged as a means to enhance visuo-haptic congruence during user interactions. The major emphasis has been placed on manipulating the inertia tensor of a shape-changing controller to control the perceived shape. This paper delves deeper by exploring how the material properties of the controller's handle, distinct from the inertial information, affect the perceived shape, focusing on the perceived length. We conducted three perceptual experiments to examine the effects of the handle's softness, thermal conductivity, and texture, respectively. Results demonstrated that a softer handle increases the perceived length, whereas a handle with higher thermal conductivity reduces it. Texture, in the form of varying bumps, also alters the length perception. These results provide more comprehensive knowledge of the intricate relationship between perceived length and controller handle properties, expanding the design alternatives for shape-changing controllers for immersive VR experiences.
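The inertia manipulation the authors build on is standard rigid-body mechanics: a uniform rod's moment of inertia about one end is I = mL^2/3, and a point mass at distance d from the grip adds md^2 (parallel-axis theorem). A quick back-of-envelope illustration, not the paper's model:

```python
# Back-of-envelope illustration of why shifting mass changes a handheld
# controller's inertia (standard rigid-body formulas, not the paper's model).

def rod_moment(mass_kg: float, length_m: float) -> float:
    """Moment of inertia of a uniform rod about one end: I = m L^2 / 3."""
    return mass_kg * length_m ** 2 / 3

def with_added_mass(base_inertia: float, added_kg: float, at_m: float) -> float:
    """A point mass at distance d from the grip adds m d^2 (parallel axis)."""
    return base_inertia + added_kg * at_m ** 2

controller = rod_moment(0.2, 0.25)                  # 200 g, 25 cm stick
weighted = with_added_mass(controller, 0.05, 0.25)  # plus 50 g at the tip
# The weighted controller has a larger moment of inertia, which dynamic-touch
# studies associate with a longer perceived length.
```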
6
Designing Multispecies Worlds for Robots, Cats, and Humans
Eike Schneiders (University of Nottingham, Nottingham, United Kingdom)Steven David Benford (University of Nottingham, Nottingham, United Kingdom)Alan Chamberlain (University of Nottingham, Nottingham, United Kingdom)Clara Mancini (The Open University, Milton Keynes, United Kingdom)Simon D. Castle-Green (University of Nottingham, Nottingham, Nottinghamshire, United Kingdom)Victor Zhi Heung Ngo (University of Nottingham, Nottingham, United Kingdom)Ju Row Farr (Blast Theory, Brighton, United Kingdom)Matt Adams (Blast Theory, Brighton, United Kingdom)Nick Tandavanitj (Blast Theory, Brighton, United Kingdom)Joel E. Fischer (University of Nottingham, Nottingham, United Kingdom)
We reflect on the design of a multispecies world centred around a bespoke enclosure in which three cats and a robot arm coexist for six hours a day during a twelve-day installation as part of an artist-led project. In this paper, we present the project's design process, encompassing various interconnected components, including the cats, the robot and its autonomous systems, the custom end-effectors and robot attachments, the diverse roles of the humans-in-the-loop, and the custom-designed enclosure. Subsequently, we provide a detailed account of key moments during the deployment and discuss the design implications for future multispecies systems. Specifically, we argue that designing the technology and its interactions is not sufficient, but that it is equally important to consider the design of the 'world' in which the technology operates. Finally, we highlight the necessity of human involvement in areas such as breakdown recovery, animal welfare, and their role as audience.
6
MindfulDiary: Harnessing Large Language Model to Support Psychiatric Patients' Journaling
Taewan Kim (KAIST, Daejeon, Korea, Republic of)Seolyeong Bae (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Hyun AH Kim (NAVER Cloud, Gyeonggi-do, Korea, Republic of)Su-woo Lee (Wonkwang University Hospital, Iksan-si, Korea, Republic of)Hwajung Hong (KAIST, Daejeon, Korea, Republic of)Chanmo Yang (Wonkwang University Hospital, Wonkwang University, Iksan, Jeonbuk, Korea, Republic of)Young-Ho Kim (NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of)
Large Language Models (LLMs) offer promising opportunities in mental health domains, although their inherent complexity and low controllability elicit concern regarding their applicability in clinical settings. We present MindfulDiary, an LLM-driven journaling app that helps psychiatric patients document daily experiences through conversation. Designed in collaboration with mental health professionals, MindfulDiary takes a state-based approach to safely comply with the experts' guidelines while carrying on free-form conversations. Through a four-week field study involving 28 patients with major depressive disorder and five psychiatrists, we examined how MindfulDiary facilitates patients' journaling practice and clinical care. The study revealed that MindfulDiary supported patients in consistently enriching their daily records and helped clinicians better empathize with their patients through an understanding of their thoughts and daily contexts. Drawing on these findings, we discuss the implications of leveraging LLMs in the mental health domain, bridging the technical feasibility and their integration into clinical settings.
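A "state-based approach" of this kind can be pictured as a small dialogue state machine whose current state constrains the LLM's prompt. The states and rules below are hypothetical, not MindfulDiary's actual protocol:

```python
# Sketch of state-based LLM dialogue control as the abstract describes it
# (hypothetical states and instructions; not the MindfulDiary implementation).

STATES = {
    "rapport": "Greet the patient warmly and invite them to share their day.",
    "explore": "Ask one gentle, open-ended follow-up about what they shared.",
    "wrap_up": "Summarize the entry supportively; do not give clinical advice.",
}

def next_state(state: str, turn_count: int) -> str:
    # Simple fixed progression; a real system could add classifier checks.
    if state == "rapport":
        return "explore"
    if state == "explore" and turn_count >= 4:
        return "wrap_up"
    return state

def build_prompt(state: str, history: list[str]) -> str:
    """Constrain the LLM to the current state's instruction plus safety rules."""
    return (STATES[state]
            + " Never discuss medication changes or diagnoses."
            + "\nConversation so far:\n" + "\n".join(history))
```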
6
FabSound: Audio-Tactile and Affective Fabric Experiences Through Mid-air Haptics
Jing Xue (University College London, London, United Kingdom)Roberto Montano Murillo (Ultraleap, Bristol, United Kingdom)Christopher Dawes (University College London, London, United Kingdom)William Frier (Ultraleap, Bristol, United Kingdom)Patricia Cornelio (Ultraleap, Bristol, United Kingdom)Marianna Obrist (University College London, London, United Kingdom)
The sound produced when touching fabrics, like a blanket, often provides information regarding the fabric’s texture properties (e.g., its roughness). Fabric roughness is one of the most important aspects of assessing fabric tactile properties. Prior research has demonstrated that touch-related sounds can alter the perception of textures. However, understanding the touch-related sound of digital fabric textures, and how such sounds could convey affective responses, remains a challenge. In this study, we mapped digital fabric textures using mid-air haptic stimuli and examined how auditory manipulation influences people’s roughness perception. Through qualitative interviews, participants detailed that while rubbing sounds smoothen fabric texture perception, pure tones at 450 Hz and 900 Hz accentuate roughness perception. The rubbing sound of fabric evoked associations with soft materials and led to more calming experiences. In addition, we discussed how haptic interaction can be extended to multisensory modes, revealing a new perspective on mapping multisensory experiences for digital fabrics.
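For reference, the auditory stimuli are simple to reproduce in principle. This sketch generates the 450 Hz and 900 Hz pure tones mentioned above; the fade-ramp duration is an assumption, not a detail from the paper:

```python
# Generating the pure-tone auditory cues (450 Hz / 900 Hz) described above.
import numpy as np

def pure_tone(freq_hz: float, seconds: float, rate: int = 44_100) -> np.ndarray:
    t = np.arange(int(seconds * rate)) / rate
    tone = np.sin(2 * np.pi * freq_hz * t)
    # Short fade in/out to avoid audible clicks at the tone's edges.
    ramp = np.linspace(0.0, 1.0, int(0.01 * rate))
    tone[:ramp.size] *= ramp
    tone[-ramp.size:] *= ramp[::-1]
    return tone

rough_cue_low, rough_cue_high = pure_tone(450, 2.0), pure_tone(900, 2.0)
```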