List of Notable Papers

Showing up to the top 30 papers in each category.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

5
Unlocking Understanding: An Investigation of Multimodal Communication in Virtual Reality Collaboration
Ryan Ghamandi (University of Central Florida, Orlando, Florida, United States), Ravi Kiran Kattoju (University of Central Florida, Orlando, Florida, United States), Yahya Hmaiti (University of Central Florida, Orlando, Florida, United States), Mykola Maslych (University of Central Florida, Orlando, Florida, United States), Eugene Matthew Taranta (University of Central Florida, Orlando, Florida, United States), Ryan P. McMahan (University of Central Florida, Orlando, Florida, United States), Joseph LaViola (University of Central Florida, Orlando, Florida, United States)
Communication in collaboration, especially synchronous, remote communication, is crucial to the success of task-specific goals. Insufficient or excessive forms of communication may lead to detrimental effects on task performance while increasing mental fatigue. However, identifying which combinations of communication modalities provide the most efficient transfer of information in collaborative settings will greatly improve collaboration. To investigate this, we developed a remote, synchronous, asymmetric VR collaborative assembly task application, where users play the role of either mentor or mentee, and were exposed to different combinations of three communication modalities: voice, gestures, and gaze. Through task-based experiments with 25 pairs of participants (50 individuals), we evaluated quantitative and qualitative data and found that gaze did not differ significantly from multiple combinations of communication modalities. Our qualitative results indicate that mentees experienced more difficulty and frustration in completing tasks than mentors, with both types of users preferring all three modalities to be present.
4
MOSion: Gaze Guidance with Motion-triggered Visual Cues by Mosaic Patterns
Arisa Kohtani (Tokyo Institute of Technology, Tokyo, Japan), Shio Miyafuji (Tokyo Institute of Technology, Tokyo, Japan), Keishiro Uragaki (Aoyama Gakuin University, Tokyo, Japan), Hidetaka Katsuyama (Tokyo Institute of Technology, Tokyo, Japan), Hideki Koike (Tokyo Institute of Technology, Tokyo, Japan)
We propose a gaze-guiding method called MOSion that adjusts its guiding strength in response to observers' motion, based on a high-speed projector and the afterimage effect in the human visual system. Our method decomposes the target area into mosaic patterns to embed visual cues in the perceived images. The patterns direct only moving observers' attention to the target area; a stationary observer sees the original image with little distortion because of light integration in visual perception. Precomputing the patterns provides the adaptive guiding effect without tracking devices or motion-dependent computational costs. An evaluation and a user study show that the mosaic decomposition enhances perceived saliency with few visual artifacts, especially under moving conditions. Our method, embedded in white light, works in various situations such as planar posters, advertisements, and curved objects.
4
DiaryMate: Understanding User Perceptions and Experience in Human-AI Collaboration for Personal Journaling
Taewan Kim (KAIST, Daejeon, Korea, Republic of), Donghoon Shin (University of Washington, Seattle, Washington, United States), Young-Ho Kim (NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of), Hwajung Hong (KAIST, Daejeon, Korea, Republic of)
With their generative capabilities, large language models (LLMs) have transformed the role of technological writing assistants from simple editors into writing collaborators. This transition emphasizes the need to understand user perception and experience, such as balancing user intent with the involvement of LLMs, across various writing domains when designing writing assistants. In this study, we delve into the less explored domain of personal writing, focusing on the use of LLMs in introspective activities. Specifically, we designed DiaryMate, a system that assists users in journal writing with an LLM. Through a 10-day field study (N=24), we observed that participants used the diverse sentences generated by the LLM to reflect on their past experiences from multiple perspectives. However, we also observed that they often over-relied on the LLM, frequently prioritizing its emotional expressions over their own. Drawing from these findings, we discuss design considerations for leveraging LLMs in personal writing practice.
4
Personalizing Privacy Protection With Individuals' Regulatory Focus: Would You Preserve or Enhance Your Information Privacy?
Reza Ghaiumy Anaraky (New York University, New York City, New York, United States), Yao Li (University of Central Florida, Orlando, Florida, United States), Hichang Cho (National University of Singapore, Singapore, Singapore), Danny Yuxing Huang (New York University, New York, New York, United States), Kaileigh Angela Byrne (Clemson University, Clemson, South Carolina, United States), Bart Knijnenburg (Clemson University, Clemson, South Carolina, United States), Oded Nov (New York University, New York, New York, United States)
In this study, we explore the effectiveness of persuasive messages endorsing the adoption of a privacy protection technology (IoT Inspector) tailored to individuals' regulatory focus (promotion or prevention). We explore if and how regulatory fit (i.e., tuning the goal-pursuit mechanism to individuals' internal regulatory focus) can increase persuasion and adoption. We conducted a between-subject experiment (N = 236) presenting participants with the IoT Inspector in gain ("Privacy Enhancing Technology"---PET) or loss ("Privacy Preserving Technology"---PPT) framing. Results show that the effect of regulatory fit on adoption is mediated by trust and privacy calculus processes: prevention-focused users who read the PPT message trust the tool more. Furthermore, privacy calculus favors using the tool when promotion-focused individuals read the PET message. We discuss the contribution of understanding the cognitive mechanisms behind regulatory fit in privacy decision-making to support privacy protection.
4
Using the Visual Language of Comics to Alter Sensations in Augmented Reality
Arpit Bhatia (University of Copenhagen, Copenhagen, Denmark), Henning Pohl (Aalborg University, Aalborg, Denmark), Teresa Hirzle (University of Copenhagen, Copenhagen, Denmark), Hasti Seifi (Arizona State University, Tempe, Arizona, United States), Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)
Augmented Reality (AR) excels at altering what we see but non-visual sensations are difficult to augment. To augment non-visual sensations in AR, we draw on the visual language of comic books. Synthesizing comic studies, we create a design space describing how to use comic elements (e.g., onomatopoeia) to depict non-visual sensations (e.g., hearing). To demonstrate this design space, we built eight demos, such as speed lines to make a user think they are faster and smell lines to make a scent seem stronger. We evaluate these elements in a qualitative user study (N=20) where participants performed everyday tasks with comic elements added as augmentations. All participants stated feeling a change in perception for at least one sensation, with perceived changes detected by between four participants (touch) and 15 participants (hearing). The elements also had positive effects on emotion and user experience, even when participants did not feel changes in perception.
4
Predicting the Noticeability of Dynamic Virtual Elements in Virtual Reality
Zhipeng Li (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Yi Fei Cheng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Yukang Yan (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), David Lindlbauer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
While Virtual Reality (VR) systems can present virtual elements such as notifications anywhere, designing them so they are not missed by or distracting to users is highly challenging for content creators. To address this challenge, we introduce a novel approach to predict the noticeability of virtual elements. It computes the visual saliency distribution of what users see, and analyzes the temporal changes of the distribution with respect to the dynamic virtual elements that are animated. The computed features serve as input for a long short-term memory (LSTM) model that predicts whether a virtual element will be noticed. Our approach is based on data collected from 24 users in different VR environments performing tasks such as watching a video or typing. We evaluate our approach (n = 12), and show that it can predict the timing of when users notice a change to a virtual element within 2.56 sec compared to a ground truth, and demonstrate the versatility of our approach with a set of applications. We believe that our predictive approach opens the path for computational design tools that assist VR content creators in creating interfaces that automatically adapt virtual elements based on noticeability.
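The pipeline this abstract describes (per-frame saliency features around the animated element, analyzed over time by an LSTM that outputs a noticeability prediction) can be sketched as follows. This is a hypothetical illustration under our own assumptions, not the authors' implementation: the feature dimensions, weight shapes, and the `lstm_noticeability` function are invented for the sketch.

```python
import numpy as np

def lstm_noticeability(features, Wx, Wh, b, w_out, b_out):
    """Run a single-layer LSTM over per-frame saliency-change features
    (shape (T, D)) and map the final hidden state to a noticeability
    probability. A minimal numpy forward pass, not a trained model."""
    H = Wh.shape[0]
    h = np.zeros(H)                       # hidden state
    c = np.zeros(H)                       # cell state
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for x in features:
        z = x @ Wx + h @ Wh + b           # (4H,) gate pre-activations
        i, f, g, o = np.split(z, 4)       # input, forget, candidate, output
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)        # cell state update
        h = o * np.tanh(c)                # new hidden state
    return sigmoid(h @ w_out + b_out)     # probability the element is noticed

# Toy run with random (untrained) weights: 30 frames, 8 features per frame,
# e.g. saliency mass near the element and its temporal delta.
rng = np.random.default_rng(0)
T, D, H = 30, 8, 16
feats = rng.normal(size=(T, D))
Wx = rng.normal(scale=0.1, size=(D, 4 * H))
Wh = rng.normal(scale=0.1, size=(H, 4 * H))
b = np.zeros(4 * H)
w_out = rng.normal(scale=0.1, size=H)
p = lstm_noticeability(feats, Wx, Wh, b, w_out, 0.0)
print(float(p))
```

In the paper's setting, such a score over time could also be thresholded to predict *when* a user notices the change, which is how a prediction could be compared against ground-truth noticing times.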
4
Observer Effect in Social Media Use
Koustuv Saha (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States), Pranshu Gupta (Georgia Institute of Technology, Atlanta, Georgia, United States), Gloria Mark (University of California, Irvine, Irvine, California, United States), Emre Kiciman (Microsoft Research, Redmond, Washington, United States), Munmun De Choudhury (Georgia Institute of Technology, Atlanta, Georgia, United States)
While social media data is a valuable source for inferring human behavior, its in-practice utility hinges on extraneous factors. Notable among these is the "observer effect," in which awareness of being monitored can alter people's social media use. We present a causal-inference study examining this phenomenon in the longitudinal Facebook use of 300+ participants who voluntarily shared their data, spanning an average of 82 months before and 5 months after study enrollment. We measured deviation from participants' expected social media use through time-series analyses. Individuals with high cognitive ability and low neuroticism decreased posting immediately after enrollment, while those with high openness increased posting. The sharing of self-focused content decreased, while more diverse topics emerged. We situate the findings within theories of self-presentation and self-consciousness. We discuss the implications of correcting for the observer effect in social media data-driven measurements, and how this phenomenon sheds light on the ethics of such measurements.
4
Me, My Health, and My Watch: How Children with ADHD Understand Smartwatch Health Data
Elizabeth Ankrah (University of California, Irvine, Irvine, California, United States), Franceli L. Cibrian (Chapman University, Orange, California, United States), Lucas M. Silva (University of California, Irvine, Irvine, California, United States), Arya Tavakoulnia (University of California, Irvine, Irvine, California, United States), Jesus Armando Beltran (University of California, Irvine, Irvine, California, United States), Sabrina Schuck (University of California, Irvine, Irvine, California, United States), Kimberley D. Lakes (University of California, Riverside, Riverside, California, United States), Gillian R. Hayes (University of California, Irvine, Irvine, California, United States)
Children with ADHD can experience a wide variety of challenges related to self-regulation, which can lead to poor educational, health, and wellness outcomes. Technological interventions, such as mobile and wearable health systems, can support data collection and reflection about health status. However, little is known about how children with ADHD interpret such data. We conducted a six-week deployment study with 10 children, aged 10 to 15, during which they used a smartwatch in their homes. Results from observations and interviews during this study indicate that children with ADHD can interpret their own health data, particularly in the moment. However, as children with ADHD develop more autonomy, smartwatch systems may require alternatives for data reflection that are interpretable and actionable for them. This work contributes to the scholarly discourse around health data visualization, particularly in considering implications for the design of health technologies for children with ADHD.
4
Tagnoo: Enabling Smart Room-Scale Environments with RFID-Augmented Plywood
Yuning Su (Simon Fraser University, Burnaby, British Columbia, Canada), Tingyu Zhang (Simon Fraser University, Burnaby, British Columbia, Canada), Jiuen Feng (University of Science and Technology of China, Hefei, Anhui, China), Yonghao Shi (Simon Fraser University, Burnaby, British Columbia, Canada), Xing-Dong Yang (Simon Fraser University, Burnaby, British Columbia, Canada), Te-Yen Wu (Florida State University, Tallahassee, Florida, United States)
Tagnoo is a computational plywood augmented with RFID tags, aimed at empowering woodworkers to effortlessly create room-scale smart environments. Unlike existing solutions, Tagnoo does not necessitate technical expertise or disrupt established woodworking routines. This battery-free and cost-effective solution seamlessly integrates computation capabilities into plywood, while preserving its original appearance and functionality. In this paper, we explore various parameters that can influence Tagnoo's sensing performance and woodworking compatibility through a series of experiments. Additionally, we demonstrate the construction of a small office environment, comprising a desk, chair, shelf, and floor, all crafted by an experienced woodworker using conventional tools such as a table saw and screws while adhering to established construction workflows. Our evaluation confirms that the smart environment can accurately recognize 18 daily objects and user activities, such as a user sitting on the floor or a glass lunchbox placed on the desk, with over 90% accuracy.
4
Signs of the Smart City: Exploring the Limits and Opportunities of Transparency
Eric Corbett (Google Research, New York, New York, United States), Graham Dove (New York University, New York, New York, United States)
This paper reports on a research through design (RtD) inquiry into public perceptions of the transparency of Internet of Things (IoT) sensors increasingly deployed in urban neighborhoods as part of smart city programs. In particular, we report on the results of three participatory design workshops during which 40 New York City residents used physical signage as a medium for materializing transparency concerns about several sensors. We found that people's concerns went beyond making sensors more transparent; instead, participants sought to reveal the technology's interconnected social, political, and economic processes. Building on these findings, we highlight the opportunities in moving from treating transparency as an object to treating it as an ongoing activity. We argue that this move opens opportunities for designers and policy-makers to provide meaningful and actionable transparency of smart cities.
4
Robot-Assisted Decision-Making: Unveiling the Role of Uncertainty Visualisation and Embodiment
Sarah Schömbs (University of Melbourne, Melbourne, Victoria, Australia), Saumya Pareek (University of Melbourne, Melbourne, Victoria, Australia), Jorge Goncalves (University of Melbourne, Melbourne, Victoria, Australia), Wafa Johal (University of Melbourne, Melbourne, Victoria, Australia)
Robots are embodied agents that act under several sources of uncertainty. When assisting humans in a collaborative task, robots need to communicate their uncertainty to help inform decisions. In this study, we examine the use of visualising a robot's uncertainty in a high-stakes assisted decision-making task. In particular, we explore how different modalities of uncertainty visualisation (graphical display vs. the robot's embodied behaviour) and confidence levels (low, high, 100%) conveyed by a robot affect human decision-making and perception during a collaborative task. Our results show that these visualisations significantly impact how participants arrive at their decisions as well as how they perceive the robot's transparency across the different confidence levels. We highlight potential trade-offs and offer implications for robot-assisted decision-making. Our work contributes empirical insights into how humans make use of uncertainty visualisations conveyed by a robot in a critical robot-assisted decision-making scenario.
4
The Social Journal: Investigating Technology to Support and Reflect on Social Interactions
Sophia Sakel (LMU Munich, Munich, Germany), Tabea Blenk (LMU Munich, Munich, Germany), Albrecht Schmidt (LMU Munich, Munich, Germany), Luke Haliburton (LMU Munich, Munich, Germany)
Social interaction is a crucial part of what it means to be human. Maintaining a healthy social life is strongly tied to positive outcomes for both physical and mental health. While we use personal informatics data to reflect on many aspects of our lives, technology-supported reflection for social interactions is currently under-explored. To address this, we first conducted an online survey (N=124) to understand how users want to be supported in their social interactions. Based on this, we designed and developed an app for users to track and reflect on their social interactions and deployed it in the wild for two weeks (N=25). Our results show that users are interested in tracking meaningful in-person interactions that are currently untraced and that an app can effectively support self-reflection on social interaction frequency and social load. We contribute insights and concrete design recommendations for technology-supported reflection for social interaction.
3
Visual Noise Cancellation: Exploring Visual Discomfort and Opportunities for Vision Augmentations
Junlei Hong (University of Otago, Dunedin, New Zealand), Tobias Langlotz (University of Otago, Dunedin, New Zealand), Jonathan Sutton (University of Otago, Dunedin, New Zealand), Holger Regenbrecht (University of Otago, Dunedin, New Zealand)
Acoustic noise control or cancellation (ANC) is a commonplace component of modern audio headphones. ANC actively mitigates disturbing environmental noise for a quieter, improved listening experience by digitally controlling the frequency and amplitude characteristics of sound. Much less explored is visual noise and active visual noise control, which we address here. We first explore visual noise and scenarios in which it arises, based on findings from four workshops we conducted. We then introduce the concept of visual noise cancellation (VNC) and show how it can be used to reduce the identified effects of visual noise. In addition, we developed head-worn demonstration prototypes to practically explore active VNC in selected scenarios in a user study. Finally, we discuss applications of VNC, including vision augmentations that moderate the user's view of the environment to address perceptual needs and to provide augmented reality content.
3
Technology-Mediated Non-pharmacological Interventions for Dementia: Needs for and Challenges in Professional, Personalized and Multi-Stakeholder Collaborative Interventions
Yuling Sun (East China Normal University, Shanghai, China), Zhennan Yi (Beijing Normal University, Beijing, China), Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, Hong Kong), Junyan Mao (East China Normal University, Shanghai, China), Xin Tong (Duke Kunshan University, Kunshan, Suzhou, China)
Designing and using technologies to support Non-Pharmacological Interventions (NPI) for People with Dementia (PwD) has drawn increasing attention in HCI, with expectations of higher user engagement and positive outcomes. Yet, technologies for NPI can only be valuable if practitioners successfully incorporate them into their ongoing intervention practices beyond a limited research period. Currently, we know little about how practitioners experience and perceive these technologies in practical NPI for PwD. In this paper, we investigate this question through observations of five in-person NPI activities and interviews with 11 therapists and 5 caregivers. Our findings detail the workflow and characteristics of practical NPI, as well as practitioners' attitudes toward, experiences with, and perceptions of technology-mediated NPI in practice. Our participants emphasized that practical NPI is a complex, professional practice that requires fine-grained, personalized evaluation and planning, and that its execution is situated and collaborative across multiple stakeholders. Existing technologies often fail to consider these characteristics, which limits their practical effectiveness and sustainable use. Drawing on our findings, we discuss implications for designing more useful and practical NPI technologies.
3
A Robot Jumping the Queue: Expectations About Politeness and Power During Conflicts in Everyday Human-Robot Encounters
Franziska Babel (Linköping University, Linköping, Sweden), Robin Welsch (Aalto University, Espoo, Finland), Linda Miller (Ulm University, Ulm, Germany), Philipp Hock (Linköping University, Linköping, Sweden), Sam Thellman (Linköping University, Linköping, Sweden), Tom Ziemke (Linköping University, Linköping, Sweden)
Increasing encounters between people and autonomous service robots may lead to conflicts due to mismatches between human expectations and robot behaviour. This interactive online study (N = 335) investigated human-robot interactions at an elevator, focusing on the effect of communication and behavioural expectations on participants' acceptance and compliance. Participants evaluated a humanoid delivery robot primed as either submissive or assertive. The robot either matched or violated these expectations by using a command or appeal to ask for priority and then entering either first or waiting for the next ride. The results highlight that robots are less accepted if they violate expectations by entering first or using a command. Interactions were more effective if participants expected an assertive robot which then asked politely for priority and entered first. The findings emphasize the importance of power expectations in human-robot conflicts for the robot's evaluation and effectiveness in everyday situations.
3
Metaphors in Voice User Interfaces: A Slippery Fish
Smit Desai (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States), Michael Bernard Twidale (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States)
We explore a range of metaphors used for Voice User Interfaces (VUIs) by designers, end-users, manufacturers, and researchers, using a novel framework derived from semi-structured interviews and a literature review. We focus less on the well-established idea of metaphors as a way for interface designers to help novice users learn to interact with novel technology, and more on other ways metaphors can be used. We find that the metaphors people use are contextually fluid, can change with the mode of conversation, and can reveal differences in how people perceive VUIs compared to other devices. Not all metaphors are helpful, and some may be offensive. Analyzing this broader class of metaphors can help us understand, and perhaps even predict, problems. Metaphor analysis can be a low-cost tool to inspire design creativity and facilitate complex discussions about sociotechnical issues, enabling us to spot potential opportunities and problems in the situated use of technologies.
3
Understanding Users' Interaction with Login Notifications
Philipp Markert (Ruhr University Bochum, Bochum, Germany), Leona Lassak (Ruhr University Bochum, Bochum, Germany), Maximilian Golla (CISPA Helmholtz Center for Information Security, Saarbrücken, Germany), Markus Dürmuth (Leibniz University Hannover, Hannover, Germany)
Login notifications are intended to inform users about sign-ins and help them protect their accounts from unauthorized access. Notifications are usually sent when a login deviates from previous ones, potentially indicating malicious activity. They contain information such as the location, date, time, and device used to sign in. Users are challenged to verify whether they recognize the login (because it was them or someone they know) or to protect their account from unwanted access. In a user study, we explore users' comprehension of, reactions to, and expectations of login notifications. We utilize two treatments to measure users' behavior in response to notifications sent for a login they initiated or based on a malicious actor relying on statistical sign-in information. We find that users identify legitimate logins but need more support to halt malicious sign-ins. We discuss the identified problems and give recommendations for service providers to ensure usable and secure logins for everyone.
3
Mnemosyne - Supporting Reminiscence for Individuals with Dementia in Residential Care Settings
Andrea Baumann (Lancaster University, Lancaster, United Kingdom), Peter Shaw (Lancaster University, Lancaster, United Kingdom), Ludwig Trotter (Lancaster University, Lancaster, United Kingdom), Sarah Clinch (The University of Manchester, Manchester, United Kingdom), Nigel Davies (Lancaster University, Lancaster, United Kingdom)
Reminiscence is known to play an important part in helping to mitigate the effects of dementia. Within the HCI community, work has typically focused on supporting reminiscence at an individual or social level, but less attention has been given to supporting reminiscence in residential care settings. This lack of research became particularly apparent during the COVID-19 pandemic, when traditional forms of reminiscence involving physical artefacts and face-to-face interactions became especially challenging. In this paper, we report on the design, development, and evaluation of a reminiscence system, Mnemosyne, deployed in a residential care home over a two-year period that included the pandemic. Mnemosyne comprises a pervasive display network and a browser-based application, whose adoption and use we explored using a mixed-methods approach. Our findings offer insights that will help shape the development and evaluation of future systems, particularly those that use pervasive displays to support unsupervised reminiscence.
3
MindfulDiary: Harnessing Large Language Model to Support Psychiatric Patients' Journaling
Taewan Kim (KAIST, Daejeon, Korea, Republic of), Seolyeong Bae (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of), Hyun Ah Kim (NAVER Cloud, Gyeonggi-do, Korea, Republic of), Su-woo Lee (Wonkwang University Hospital, Iksan, Korea, Republic of), Hwajung Hong (KAIST, Daejeon, Korea, Republic of), Chanmo Yang (Wonkwang University Hospital, Wonkwang University, Iksan, Jeonbuk, Korea, Republic of), Young-Ho Kim (NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of)
Large Language Models (LLMs) offer promising opportunities in mental health domains, although their inherent complexity and low controllability elicit concern regarding their applicability in clinical settings. We present MindfulDiary, an LLM-driven journaling app that helps psychiatric patients document daily experiences through conversation. Designed in collaboration with mental health professionals, MindfulDiary takes a state-based approach to safely comply with the experts' guidelines while carrying on free-form conversations. Through a four-week field study involving 28 patients with major depressive disorder and five psychiatrists, we examined how MindfulDiary facilitates patients' journaling practice and clinical care. The study revealed that MindfulDiary supported patients in consistently enriching their daily records and helped clinicians better empathize with their patients through an understanding of their thoughts and daily contexts. Drawing on these findings, we discuss the implications of leveraging LLMs in the mental health domain, bridging technical feasibility and integration into clinical settings.
3
"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents
Zhiping Zhang (Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts, United States), Michelle Jia (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Hao-Ping (Hank) Lee (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Bingsheng Yao (Rensselaer Polytechnic Institute, Troy, New York, United States), Sauvik Das (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Ada Lerner (Northeastern University, Boston, Massachusetts, United States), Dakuo Wang (Northeastern University, Boston, Massachusetts, United States), Tianshi Li (Northeastern University, Boston, Massachusetts, United States)
The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the need for paradigm shifts to protect the privacy of LLM-based CA users.
3
Decide Yourself or Delegate - User Preferences Regarding the Autonomy of Personal Privacy Assistants in Private IoT-Equipped Environments
Karola Marky (Ruhr University Bochum, Bochum, Germany), Alina Stöver (Technische Universität Darmstadt, Darmstadt, Germany), Sarah Prange (University of the Bundeswehr Munich, Munich, Germany), Kira Bleck (TU Darmstadt, Darmstadt, Germany), Paul Gerber (Technische Universität Darmstadt, Darmstadt, Germany), Verena Zimmermann (ETH Zürich, Zürich, Switzerland), Florian Müller (LMU Munich, Munich, Germany), Florian Alt (University of the Bundeswehr Munich, Munich, Germany), Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
Personalized privacy assistants (PPAs) communicate privacy-related decisions of their users to Internet of Things (IoT) devices. There are different ways to implement PPAs, varying in their degree of autonomy and decision model. This paper investigates user perceptions of PPA autonomy models and of privacy profiles (archetypes of individual privacy needs) as a basis for PPA decisions in private environments (e.g., a friend's home). We first explore how privacy profiles can be assigned to users and propose an assignment method. Next, we investigate user perceptions in 18 usage scenarios with varying contexts, data types, and numbers of decisions in a study with 1126 participants. We found considerable differences between the profiles in settings with few decisions. When the number of decisions is high (more than one per hour), participants exclusively preferred fully autonomous PPAs. Finally, we discuss implications and recommendations for designing scalable PPAs that serve as privacy interfaces for future IoT devices.
3
Investigating Contextual Notifications to Drive Self-Monitoring in mHealth Apps for Weight Maintenance
Yu-Peng Chen (University of Florida, Gainesville, Florida, United States), Julia Woodward (University of South Florida, Tampa, Florida, United States), Dinank Bista (University of Florida, Gainesville, Florida, United States), Xuanpu Zhang (Department of CISE, University of Florida, Gainesville, Florida, United States), Ishvina Singh (University of Florida, Gainesville, Florida, United States), Oluwatomisin Obajemu (University of Florida, Gainesville, Florida, United States), Meena N. Shankar (University of Florida, Gainesville, Florida, United States), Kathryn M. Ross (University of Florida, Gainesville, Florida, United States), Jaime Ruiz (University of Florida, Gainesville, Florida, United States), Lisa Anthony (University of Florida, Gainesville, Florida, United States)
Mobile health applications for weight maintenance offer self-monitoring as a tool to empower users to achieve health goals (e.g., losing weight); yet maintaining consistent self-monitoring over time proves challenging for users. These apps use push notifications to help increase users’ app engagement and reduce long-term attrition, but notifications are often ignored because they appear at inopportune moments. Therefore, we analyzed whether delivering push notifications based on time alone, or also considering user context (e.g., current activity), affected users’ engagement in a weight maintenance app, in a 4-week in-the-wild study with 30 participants. We found no difference between the two conditions in participants’ overall (across the day) self-monitoring frequency, but in the context-based condition, participants responded faster and more frequently to notifications and logged their data in a more timely manner (i.e., closer to when eating or exercising occurred). Our work informs the design of notifications in weight maintenance apps to improve their efficacy in promoting self-monitoring.
2
Spatial Gaze Markers: Supporting Effective Task Switching in Augmented Reality
Mathias N. Lystbæk (Aarhus University, Aarhus, Denmark)Ken Pfeuffer (Aarhus University, Aarhus, Denmark)Tobias Langlotz (University of Otago, Dunedin, New Zealand)Jens Emil Sloth Grønbæk (Aarhus University, Aarhus, Denmark)Hans Gellersen (Lancaster University, Lancaster, United Kingdom)
Task switching occurs frequently in daily routines that involve physical activity. In this paper, we introduce Spatial Gaze Markers, an augmented reality tool to support users in immediately returning to the last point of interest after an attention shift. The tool is task-agnostic, using only eye-tracking information to infer distinct points of visual attention and to mark the corresponding area in the physical environment. We present a user study that evaluates the effectiveness of Spatial Gaze Markers in simulated physical repair and inspection tasks against a no-marker baseline. The results give insights into how Spatial Gaze Markers affect performance, task load, and user experience across task types and levels of distraction. Our work is relevant for assisting physical workers with simple AR techniques that make task switching faster and less effortful.
2
LegacySphere: Facilitating Intergenerational Communication Through Perspective-Taking and Storytelling in Embodied VR
Chenxinran Shen (University of British Columbia, Vancouver, British Columbia, Canada)Joanna McGrenere (University of British Columbia, Vancouver, British Columbia, Canada)Dongwook Yoon (University of British Columbia, Vancouver, British Columbia, Canada)
Intergenerational communication can enhance well-being and family cohesion, but stereotypes and low empathy can be barriers to achieving effective communication. VR perspective-taking is a potential approach that is known to enhance understanding and empathy toward others by allowing a user to take another's viewpoint. In this study, we introduce LegacySphere, a novel VR perspective-taking experience leveraging the combination of embodiment, role-play, and storytelling. To explore LegacySphere's design and impact, we conducted an observational study involving five dyads with a one-generation gap. We found that LegacySphere promotes empathetic and reflexive intergenerational dialogue. Specifically, avatar embodiment encourages what we term "relationship cushioning," fostering a trustful, open environment for genuine communication. The blending of real and embodied identities prompts insightful questions, merging both perspectives. The experience also nurtures a sense of unity and stimulates reflections on aging. Our work highlights the potential of immersive technologies for enhancing empathetic intergenerational relationships.
2
ARCADIA: A Gamified Mixed Reality System for Emotional Regulation and Self-Compassion
José Luis Soler-Domínguez (Instituto Tecnológico de Informática, Valencia, Spain)Samuel Navas-Medrano (Instituto Tecnológico de Informática, Valencia, Spain)Patricia Pons (Instituto Tecnológico de Informática, Valencia, Spain)
Mental health and wellbeing have become significant challenges for global society, and emotional regulation strategies hold the potential to offer a transversal approach to addressing them. However, the persistently declining adherence of patients to therapeutic interventions, coupled with the limited applicability of current technological interventions across diverse individuals and diagnoses, underscores the need for innovative solutions. We present ARCADIA, a Mixed Reality platform strategically co-designed with therapists to enhance emotional regulation and self-compassion. ARCADIA comprises several gamified therapeutic activities, with a strong emphasis on fostering patient motivation. Through a dual study involving therapists and mental health patients, we validate the fully functional prototype of ARCADIA. Encouraging results are observed in terms of system usability, user engagement, and therapeutic potential. These findings lead us to believe that the combination of Mixed Reality and gamified therapeutic activities could be a significant tool in the future of mental health.
2
Uncovering and Addressing Blink-Related Challenges in Using Eye Tracking for Interactive Systems
Jesse W. Grootjen (LMU Munich, Munich, Germany)Henrike Weingärtner (LMU Munich, Munich , Germany)Sven Mayer (LMU Munich, Munich, Germany)
Interactive systems increasingly use physiological sensing to enable advanced functionalities. While eye tracking is a promising means to understand the user, eye tracking data inherently suffers from missing samples due to blinks, which may reduce system performance. We conducted a literature review to understand how researchers deal with this issue and found that they often implement use-case-specific pipelines, ranging from ignoring missing data to artificial interpolation. With these first insights, we ran a large-scale analysis on 11 publicly available datasets to understand the impact of the various approaches on data quality and accuracy. In doing so, we highlight the pitfalls in data processing and identify which methods work best. Based on our results, we provide guidelines for handling eye tracking data for interactive systems. Further, we propose a standard data processing pipeline that allows researchers and practitioners to pre-process and standardize their data efficiently.
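Blink gaps in eye-tracking signals typically last only a few hundred milliseconds, so a common baseline among the interpolation approaches the abstract mentions is to bridge short runs of missing samples linearly while leaving longer dropouts untouched. A minimal sketch of that idea (the function name and gap threshold are illustrative, not the authors' standardized pipeline):

```python
import numpy as np

def interpolate_blinks(pupil, valid, max_gap=10):
    """Linearly interpolate missing pupil samples across short gaps.

    pupil: 1-D array of pupil-diameter samples.
    valid: boolean mask, False where the tracker reported no data
    (e.g. during a blink). Gaps longer than max_gap samples are left
    as NaN rather than invented, mirroring the common practice of
    only bridging blink-length dropouts.
    """
    pupil = pupil.astype(float).copy()
    pupil[~valid] = np.nan
    missing = np.isnan(pupil)
    idx = np.arange(len(pupil))
    # bridge every gap linearly from the surrounding valid samples
    filled = pupil.copy()
    filled[missing] = np.interp(idx[missing], idx[~missing], pupil[~missing])
    # re-mask runs of missing samples that are too long to bridge safely
    run_start = None
    for i in range(len(pupil) + 1):
        if i < len(pupil) and missing[i]:
            if run_start is None:
                run_start = i
        elif run_start is not None:
            if i - run_start > max_gap:
                filled[run_start:i] = np.nan
            run_start = None
    return filled
```

Whether gaps should be bridged at all, and at what threshold, is exactly the kind of pipeline choice the paper's analysis compares.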
2
Sweating the Details: Emotion Recognition and the Influence of Physical Exertion in Virtual Reality Exergaming
Dominic Potts (University of Bath, Bath, United Kingdom)Zoe Broad (University of Bath, Bath, United Kingdom)Tarini Sehgal (University of Bath, Bath, United Kingdom)Joseph Hartley (University of Bath, Bath, United Kingdom)Eamonn O'Neill (University of Bath, Bath, United Kingdom)Crescent Jicol (University of Bath, Bath, United Kingdom)Christopher Clarke (University of Bath, Bath, United Kingdom)Christof Lutteroth (University of Bath, Bath, United Kingdom)
There is great potential for adapting Virtual Reality (VR) exergames based on a user's affective state. However, physical activity and VR interfere with physiological sensors, making affect recognition challenging. We conducted a study (n=72) in which users experienced four emotion-inducing VR exergaming environments (happiness, sadness, stress, and calmness) at three levels of exertion (low, medium, high). We collected physiological measures through pupillometry, electrodermal activity, heart rate, and facial tracking, as well as subjective affect ratings. Our validated virtual environments, data, and analyses are openly available. We found that the level of exertion influences the way affect can be recognised, as well as affect itself. Furthermore, our results highlight the importance of data cleaning to account for environmental and interpersonal factors interfering with physiological measures. The results shed light on the relationships between physiological measures and affective states and inform design choices about sensors and data cleaning approaches for affective VR.
2
Understanding User Acceptance of Electrical Muscle Stimulation in Human-Computer Interaction
Sarah Faltaous (University of Duisburg-Essen, Essen, Germany)Julie R. Williamson (University of Glasgow, Glasgow, United Kingdom)Marion Koelle (OFFIS - Institute for Information Technology, Oldenburg, Germany)Max Pfeiffer (Aldi Sued, Muelheim a.d.R., NRW, Germany)Jonas Keppel (University of Duisburg-Essen, Essen, Germany)Stefan Schneegass (University of Duisburg-Essen, Essen, NRW, Germany)
Electrical Muscle Stimulation (EMS) has unique capabilities that can manipulate users' actions or perceptions, such as actuating user movement while walking, changing the perceived texture of food, and guiding movements for a user learning an instrument. These applications highlight the potential utility of EMS, but such benefits may be lost if users reject EMS. To investigate user acceptance of EMS, we conducted an online survey (N=101). We compared eight scenarios, six from HCI research applications and two from the sports and health domain. To gain further insights, we conducted in-depth interviews with a subset of the survey respondents (N=10). The results point to the challenges and potential of EMS regarding social and technological acceptance, showing that there is greater acceptance of applications that manipulate action than those that manipulate perception. The interviews revealed safety concerns and user expectations for the design and functionality of future EMS applications.
2
Narrating Fitness: Leveraging Large Language Models for Reflective Fitness Tracker Data Interpretation
Konstantin R. Strömel (Osnabrück University, Osnabrück, Germany)Stanislas Henry (ENSEIRB-MATMECA Bordeaux, Bordeaux, France)Tim Johansson (Chalmers University of Technology, Gothenburg, Sweden)Jasmin Niess (University of Oslo, Oslo, Norway)Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden)
While fitness trackers generate and present quantitative data, past research suggests that users often conceptualise their wellbeing in qualitative terms. This discrepancy between numeric data and personal wellbeing perception may limit the effectiveness of personal informatics tools in encouraging meaningful engagement with one’s wellbeing. In this work, we aim to bridge the gap between raw numeric metrics and users’ qualitative perceptions of wellbeing. In an online survey with n = 273 participants, we used step data from fitness trackers and compared three presentation formats: standard charts, qualitative descriptions generated by an LLM (Large Language Model), and a combination of both. Our findings reveal that users experienced more reflection, focused attention and reward when presented with the generated qualitative data compared to the standard charts alone. Our work demonstrates how automatically generated data descriptions can effectively complement numeric fitness data, fostering a richer, more reflective engagement with personal wellbeing information.
2
Designing Haptic Feedback for Sequential Gestural Inputs
Shan Xu (Meta, Redmond, Washington, United States)Sarah Sykes (Meta, Redmond, Washington, United States)Parastoo Abtahi (Meta, Toronto, Ontario, Canada)Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)Daylon Walden (Meta, Redmond, Washington, United States)Michael Glueck (Meta, Toronto, Ontario, Canada)Carine Rognon (Meta, Redmond, Washington, United States)
This work seeks to design and evaluate haptic feedback for sequential gestural inputs, where mid-air hand gestures are used to express system commands. Nine haptic patterns are first designed leveraging metaphors. To pursue efficient interaction, we examine the trade-off between pattern duration and recognition accuracy and find that durations as short as 0.3 s-0.5 s achieve roughly 80%-90% accuracy. We then examine the haptic design for sequential inputs, where we vary when the feedback for each gesture is provided, along with pattern duration, gesture sequence length, and age. Results show that providing haptic patterns right after detected hand gestures leads to significantly more efficient interaction compared with concatenating all haptic patterns after the gesture sequence. Moreover, the number of gestures had little impact on performance, but age was a significant predictor. Our results suggest immediate feedback with pattern durations of 0.3 s and 0.5 s for younger and older users, respectively.
2
A Systematic Review and Meta-analysis of the Effectiveness of Body Ownership Illusions in Virtual Reality
Aske Mottelson (IT University of Copenhagen, Copenhagen, Denmark)Andreea Muresan (University of Copenhagen, Copenhagen, Denmark)Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)Guido Makransky (University of Copenhagen, Copenhagen, Denmark)
Body ownership illusions (BOIs) occur when participants experience that their actual body is replaced by a body shown in virtual reality (VR). Based on a systematic review of the cumulative evidence on BOIs from 111 research articles published from 2010 to 2021, this article summarizes the findings of empirical studies of BOIs. Following the PRISMA guidelines, the review points to diverse experimental practices for inducing and measuring body ownership. The two major components of embodiment measurement, body ownership and agency, are examined. The embodiment of virtual avatars generally leads to modest body ownership and slightly higher agency. We also find that BOI research lacks statistical power and standardization across tasks, measurement instruments, and analysis approaches. Furthermore, the reviewed studies showed a lack of clarity in fundamental terminology, constructs, and theoretical underpinnings. These issues restrict scientific advances on the major components of BOIs, and together impede scientific rigor and theory-building.
1
Investigating Effect of Altered Auditory Feedback on Self-Representation, Subjective Operator Experience, and Task Performance in Teleoperation of a Social Robot
Nami Ogawa (CyberAgent, Inc., Tokyo, Japan)Jun Baba (CyberAgent, Inc., Tokyo, Japan)Junya Nakanishi (Osaka University, Osaka, Japan)
Teleoperating social robots requires operators to "speak as the robot," as local users would favor robots whose appearance and voice match. This study focuses on real-time altered auditory feedback (AAF), a method to transform the acoustic traits of one's speech and provide feedback to the speaker, to transform the operator's self-representation toward "becoming the robot." To explore whether AAF with voice transformation (VT) matched to the robot's appearance can influence the operator's self-representation and ease the task, we experimented with three conditions: no VT (No-VT), only VT (VT-only), and VT with AAF (VT-AAF), where participants teleoperated a robot to verbally serve real passersby at a bakery. The questionnaire results demonstrate that VT-AAF changed the participants' self-representation to match the robot's character and improved participants' subjective teleoperating experience, while task performance and implicit measures of self-representation were not significantly affected. Notably, 87% of the participants preferred VT-AAF the most.
1
Augmented Reality Cues Facilitate Task Resumption after Interruptions in Computer-Based and Physical Tasks
Kilian L. Bahnsen (Julius-Maximilians-Universität Würzburg, Würzburg, Germany)Lucas Tiemann (Julius-Maximilians-Universität Würzburg, Würzburg, Germany)Lucas Plabst (Julius-Maximilians-University Würzburg, Würzburg, Germany)Tobias Grundgeiger (Julius-Maximilians-Universität Würzburg, Würzburg, Germany)
Many work domains include numerous interruptions, which can contribute to errors. We investigated the potential of augmented reality (AR) cues to facilitate primary task resumption after interruptions of varying lengths. Experiment 1 (N = 83) involved a computer-based primary task with a red AR arrow at the to-be-resumed task step, placed either by the participants via a gesture or automatically. Compared to no cue, both cues significantly reduced the resumption lag (i.e., the time between the end of the interruption and the resumption of the primary task) following long but not short interruptions. Experiment 2 (N = 38) involved a tangible sorting task, utilizing only the automatic cue. The AR cue facilitated task resumption compared to no cue after both short and long interruptions. We demonstrate the potential of AR cues in mitigating the negative effects of interruptions and make suggestions for integrating AR technologies for task resumption.
1
Using Low-frequency Sound to Create Non-contact Sensations On and In the Body
Waseem Hassan (University of Copenhagen, Copenhagen, Denmark)Asier Marzo (Universidad Publica de Navarra, Pamplona, Navarre, Spain)Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)
This paper proposes a method for generating non-contact sensations using low-frequency sound waves without requiring user instrumentation. The method leverages the fundamental acoustic response of a confined space to produce predictable spatial pressure distributions at low frequencies, called modes. These modes can be used to produce sensations either throughout the body, in localized areas of the body, or within the body. We first validate the location and strength of the modes through acoustic simulation. Next, a perceptual study is conducted to show how different frequencies produce qualitatively different sensations across and within the participants' bodies. Low-frequency sound thus offers a new way of delivering non-contact sensations throughout the body, and the results indicate high accuracy for predicting sensations at specific body locations.
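The "modes" of a confined space that the abstract refers to are a textbook acoustics result: for a rigid rectangular room, standing-wave pressure patterns occur at frequencies f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). A short sketch computing these frequencies (standard formula only; this is not the authors' acoustic model or simulation code):

```python
from itertools import product
from math import sqrt

def room_mode_frequencies(lx, ly, lz, n_max=2, c=343.0):
    """Mode frequencies (Hz) of a rigid rectangular room.

    lx, ly, lz: room dimensions in metres; c: speed of sound in air.
    Returns {(nx, ny, nz): frequency}, sorted by frequency, covering
    axial, tangential, and oblique modes up to index n_max.
    """
    modes = {}
    for nx, ny, nz in product(range(n_max + 1), repeat=3):
        if nx == ny == nz == 0:
            continue  # (0,0,0) is not a mode
        f = (c / 2.0) * sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        modes[(nx, ny, nz)] = f
    return dict(sorted(modes.items(), key=lambda kv: kv[1]))
```

For a 5 m x 4 m x 3 m room, the lowest (axial) mode (1, 0, 0) falls at 343/10 = 34.3 Hz, illustrating why only low frequencies produce the large, predictable pressure regions the paper exploits.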
1
Elastica: Adaptive Live Augmented Presentations with Elastic Mappings Across Modalities
Yining Cao (University of California, San Diego, San Diego, California, United States)Rubaiat Habib Kazi (Adobe Research, Seattle, Washington, United States)Li-Yi Wei (Adobe Research, San Jose, California, United States)Deepali Aneja (Adobe Research, Seattle, Washington, United States)Haijun Xia (University of California, San Diego, San Diego, California, United States)
Augmented presentations offer compelling storytelling by combining speech content, gestural performance, and animated graphics in a congruent manner. The expressiveness of these presentations stems from the harmonious coordination of spoken words and graphic elements, complemented by smooth animations aligned with the presenter's gestures. However, achieving such congruence in a live presentation poses significant challenges due to the unpredictability and imprecision inherent in presenters' real-time actions. Existing methods either leverage rigid mappings without predefined states or require presenters to conform to predefined animations. We introduce adaptive presentations that dynamically adjust predefined graphic animations to real-time speech and gestures. Our approach leverages script following and motion warping to establish elastic mappings that generate runtime graphic parameters coordinating speech, gesture, and predefined animation state. Our evaluation demonstrated that the proposed adaptive presentation can effectively mitigate undesired visual artifacts caused by performance deviations and enhance the expressiveness of resulting presentations.
1
ecSkin: Low-Cost Fabrication of Epidermal Electrochemical Sensors for Detecting Biomarkers in Sweat
Sai Nandan Panigrahy (Indian Institute of Technology Patna, Patna, India)Chang Hyeon Lee (University of Calgary, Calgary, Alberta, Canada)Vrahant Nagoria (Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India)Mohammad Janghorban (University of Calgary, Calgary, Alberta, Canada)Richa Pandey (University of Calgary, Calgary, Alberta, Canada)Aditya Shekhar Nittala (University of Calgary, Calgary, Alberta, Canada)
The development of low-cost and non-invasive biosensors for monitoring electrochemical biomarkers in sweat holds great promise for personalized healthcare and early disease detection. In this work, we present ecSkin, a novel fabrication approach for realizing epidermal electrochemical sensors that can detect two vital biomarkers in sweat: glucose and cortisol. We contribute the synthesis of functional, reusable inks that can be formulated using simple household materials. Electrical characterization indicates that these inks outperform commercially available carbon inks. Cyclic voltammetry experiments show that our inks are electrochemically active and detect glucose and cortisol at activation voltages of -0.36 V and -0.22 V, respectively. Chronoamperometry experiments show that the sensors can detect the full range of glucose and cortisol levels typically found in sweat. Results from a user evaluation show that ecSkin sensors successfully function on the skin. Finally, we demonstrate three applications to illustrate how ecSkin devices can be deployed for various interactive applications.
1
TimeTunnel: Integrating Spatial and Temporal Motion Editing for Character Animation in Virtual Reality
Qian Zhou (Autodesk Research, Toronto, Ontario, Canada)David Ledo (Autodesk Research, Toronto, Ontario, Canada)George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada)Fraser Anderson (Autodesk Research, Toronto, Ontario, Canada)
Editing character motion in Virtual Reality is challenging as it requires working with both spatial and temporal data using controls with multiple degrees of freedom. The spatial and temporal controls are separated, making it difficult to adjust poses over time and predict the effects across adjacent frames. To address this challenge, we propose TimeTunnel, an immersive motion editing interface that integrates spatial and temporal control for 3D character animation in VR. TimeTunnel provides an approachable editing experience via KeyPoses and Trajectories. KeyPoses are a set of representative poses automatically computed to concisely depict motion. Trajectories are 3D animation curves that pass through the joints of KeyPoses to represent in-betweens. TimeTunnel integrates spatial and temporal control by superimposing Trajectories and KeyPoses onto a 3D character. We conducted two studies to evaluate TimeTunnel. In our quantitative study, TimeTunnel reduced the amount of time required for editing motion, and saved effort in locating target poses. Our qualitative study with domain experts demonstrated how TimeTunnel is an approachable interface that can simplify motion editing, while still preserving a direct representation of motion.
1
EyeEcho: Continuous and Low-power Facial Expression Tracking on Glasses
Ke Li (Cornell University, Ithaca, New York, United States)Ruidong Zhang (Cornell University, Ithaca, New York, United States)Siyuan Chen (Cornell University, Ithaca, New York, United States)Boao Chen (Cornell University, Ithaca, New York, United States)Mose Sakashita (Cornell University, Ithaca, New York, United States)Francois Guimbretiere (Cornell University, Ithaca, New York, United States)Cheng Zhang (Cornell University, Ithaca, New York, United States)
In this paper, we introduce EyeEcho, a minimally-obtrusive acoustic sensing system designed to enable glasses to continuously monitor facial expressions. It utilizes two pairs of speakers and microphones mounted on glasses, to emit encoded inaudible acoustic signals directed towards the face, capturing subtle skin deformations associated with facial expressions. The reflected signals are processed through a customized machine-learning pipeline to estimate full facial movements. EyeEcho samples at 83.3 Hz with a relatively low power consumption of 167 mW. Our user study involving 12 participants demonstrates that, with just four minutes of training data, EyeEcho achieves highly accurate tracking performance across different real-world scenarios, including sitting, walking, and after remounting the devices. Additionally, a semi-in-the-wild study involving 10 participants further validates EyeEcho's performance in naturalistic scenarios while participants engage in various daily activities. Finally, we showcase EyeEcho's potential to be deployed on a commercial-off-the-shelf (COTS) smartphone, offering real-time facial expression tracking.
1
How AI Processing Delays Foster Creativity: Exploring Research Question Co-Creation with an LLM-based Agent
Yiren Liu (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Si Chen (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Haocong Cheng (University of Illinois Urbana-Champaign, Champaign, Illinois, United States)Mengxia Yu (University of Notre Dame, Notre Dame, Indiana, United States)Xiao Ran (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Andrew Mo (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Yiliu Tang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Yun Huang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)
Developing novel research questions (RQs) often requires extensive literature reviews, especially in interdisciplinary fields. To support RQ development through human-AI co-creation, we leveraged Large Language Models (LLMs) to build an LLM-based agent system named CoQuest. We conducted an experiment with 20 HCI researchers to examine the impact of two interaction designs: breadth-first and depth-first RQ generation. The findings revealed that participants perceived the breadth-first approach as more creative and trustworthy upon task completion. Conversely, during the task, participants considered the depth-first generated RQs as more creative. Additionally, we discovered that AI processing delays allowed users to reflect on multiple RQs simultaneously, leading to a higher quantity of generated RQs and an enhanced sense of control. Our work makes both theoretical and practical contributions by proposing and evaluating a mental model for human-AI co-creation of RQs. We also address potential ethical issues, such as biases and over-reliance on AI, advocating for using the system to improve human research creativity rather than automating scientific inquiry. The system’s source is available at: https://github.com/yiren-liu/coquest.
1
ShareYourReality: Investigating Haptic Feedback and Agency in Virtual Avatar Co-embodiment
Karthikeya Puttur Venkatraj (Centrum Wiskunde & Informatica, Amsterdam, Netherlands)Wo Meijer (TU Delft, Delft, Netherlands)Monica Perusquia-Hernandez (Nara Institute of Science and Technology, Ikoma-shi, Nara, Japan)Gijs Huisman (Delft University of Technology, Delft, Netherlands)Abdallah El Ali (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands)
Virtual co-embodiment enables two users to share a single avatar in Virtual Reality (VR). During such experiences, the illusion of shared motion control can break during joint-action activities, highlighting the need for position-aware feedback mechanisms. Drawing on the perceptual crossing paradigm, we explore how haptics can enable non-verbal coordination between co-embodied participants. In a within-subjects study (20 participant pairs), we examined the effects of vibrotactile haptic feedback (None, Present) and avatar control distribution (25-75%, 50-50%, 75-25%) across two VR reaching tasks (Targeted, Free-choice) on participants’ Sense of Agency (SoA), co-presence, body ownership, and motion synchrony. We found that (a) SoA was lower in the free-choice task with haptics than without, (b) SoA was higher during the shared targeted task, (c) co-presence and body ownership were significantly higher in the free-choice task, and (d) players’ hand motions synchronized more in the targeted task. We provide cautionary considerations when including haptic feedback mechanisms for avatar co-embodiment experiences.
1
Table Illustrator: Puzzle-based interactive authoring of plain tables
Yanwei Huang (Zhejiang University, Hangzhou, Zhejiang, China)Yurun Yang (Zhejiang University, Hangzhou, China)Xinhuan Shu (Newcastle University, Newcastle Upon Tyne, United Kingdom)Ran Chen (Zhejiang University, Hangzhou, Zhejiang, China)Di Weng (Zhejiang University, Hangzhou, China)Yingcai Wu (Zhejiang University, Hangzhou, Zhejiang, China)
Plain tables excel at displaying data details and are widely used in data presentation, often polished to an elaborate appearance for readability in many scenarios. However, existing authoring tools fail to provide both flexible and efficient support for altering the table layout and styles, motivating us to develop an intuitive and swift tool for table prototyping. To this end, we contribute Table Illustrator, a table authoring system taking a novel visual metaphor, puzzle, as the primary interaction unit. Through combinations and configurations on puzzles, the system enables rapid table construction and supports a diverse range of table layouts and styles. The tool design is informed by practical challenges and requirements from interviews with 10 table practitioners and a structured design space based on an analysis of over 2,500 real-world tables. User studies showed that Table Illustrator achieved comparable performance to Microsoft Excel while reducing users' completion time and perceived workload.
1
The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction
Andrea Cuadra (Stanford University, Stanford, California, United States)Maria Wang (Stanford University, Stanford, California, United States)Lynn Andrea Stein (Franklin W. Olin College of Engineering, Needham, Massachusetts, United States)Malte F. Jung (Cornell University, Ithaca, New York, United States)Nicola Dell (Cornell Tech, New York, New York, United States)Deborah Estrin (Cornell Tech, New York, New York, United States)James A. Landay (Stanford University, Stanford, California, United States)
From ELIZA to Alexa, Conversational Agents (CAs) have been deliberately designed to elicit or project empathy. Although empathy can help technology better serve human needs, it can also be deceptive and potentially exploitative. In this work, we characterize empathy in interactions with CAs, highlighting the importance of distinguishing evocations of empathy between two humans from ones between a human and a CA. To this end, we systematically prompt CAs backed by large language models (LLMs) to display empathy while conversing with, or about, 65 distinct human identities, and also compare how different LLMs display or model empathy. We find that CAs make value judgments about certain identities, and can be encouraging of identities related to harmful ideologies (e.g., Nazism and xenophobia). Moreover, a computational approach to understanding empathy reveals that despite their ability to display empathy, CAs do poorly when interpreting and exploring a user's experience, contrasting with their human counterparts.
1
Outplay Your Weaker Self: A Mixed-Methods Study on Gamification to Overcome Procrastination in Academia
Jeanine Kirchner-Krath (Friedrich-Alexander-Universität Erlangen-Nuremberg, Nuremberg, Germany)Manuel Schmidt-Kraepelin (Institute of Applied Informatics and Formal Description Methods, Karlsruhe, Germany)Sofia Schöbel (Information Systems, Osnabrück, Germany)Mathias Ullrich (University of Koblenz, Koblenz, Germany)Ali Sunyaev (Karlsruhe Institute of Technology, Karlsruhe, Germany)Harald F. O.. von Korflesch (University of Koblenz, Koblenz, Germany)
Procrastination is the deliberate postponing of tasks knowing that it will have negative consequences in the future. Despite the potentially serious impact on mental and physical health, research has just started to explore the potential of information systems to help students combat procrastination. Specifically, while existing learning systems increasingly employ elements of game design to transform learning into an enjoyable and purposeful adventure, little is known about the effects of gameful approaches to overcome procrastination in academic settings. This study advances knowledge on gamification to counter procrastination by conducting a mixed-methods study among higher education students. Our results shed light on usage patterns and outcomes of gamification on self-efficacy, self-control, and procrastination behaviors. The findings contribute to theory by providing a better understanding of the potential of gamification to tackle procrastination. Practitioners are supported by implications on how to design gamified learning systems to support learners in self-organized work.
1
Synlogue with Aizuchi-bot: Investigating the Co-Adaptive and Open-Ended Interaction Paradigm
Kazumi Yoshimura (Waseda University, Shinjuku-ku, Tokyo, Japan)Dominique Chen (Waseda University, Shinjuku-ku, Tokyo, Japan)Olaf Witkowski (Crosslabs, Kyoto, Japan)
In contrast to dialogue, in which completed messages are exchanged through turn-taking, synlogue is a mode of conversation characterized by co-creative processes, such as mutually complementing incomplete utterances and cooperative overlaps of backchannelings. Such co-creative conversations have the potential to alleviate social divisions in contemporary information environments. This study proposed the design concept of synlogue based on literature in linguistics and anthropology and explored features that facilitate synlogic interactions in computer-mediated interfaces. In an experiment, we focused on aizuchi, an important backchanneling element that drives synlogic conversation, and compared participants' speech and perceptual changes when a bot dynamically uttered aizuchi or remained silent in a situation simulating an online video call. We then discussed implications for interaction design based on our qualitative and quantitative analysis of the experiment. The synlogic perspective presented in this study is expected to help HCI researchers achieve more convivial forms of communication.
1
Listening to the Voices: Describing Ethical Caveats of Conversational User Interfaces According to Experts and Frequent Users
Thomas Mildner (University of Bremen, Bremen, Germany)Orla Cooney (University College Dublin, Dublin, Ireland)Anna-Maria Meck (BMW Group, Munich, Germany)Marion Bartl (University College Dublin, Dublin, Ireland)Gian-Luca Savino (University of St. Gallen, St. Gallen, Switzerland)Philip R. Doyle (HMD Research, Dublin, Ireland)Diego Garaialde (University College Dublin, Dublin, Ireland)Leigh Clark (Bold Insight UK, London, United Kingdom)John Sloan (University College Dublin, Dublin, Ireland)Nina Wenig (University of Bremen, Bremen, Germany)Rainer Malaka (University of Bremen, Bremen, Germany)Jasmin Niess (University of Oslo, Oslo, Norway)
Advances in natural language processing and understanding have led to a rapid growth in the popularity of conversational user interfaces (CUIs). While CUIs introduce novel benefits, they also yield risks that may exploit people's trust. Although research looking at unethical design deployed through graphical user interfaces (GUIs) established a thorough taxonomy of so-called dark patterns, there is a need for an equally in-depth understanding in the context of CUIs. Addressing this gap, we interviewed 27 participants from three cohorts: researchers, practitioners, and frequent users of CUIs. Applying thematic analysis, we develop five themes reflecting each cohort's insights about ethical design challenges and introduce the CUI Expectation Cycle, bridging system capabilities and user expectations while respecting each theme's ethical caveats. This research aims to inform future work to consider ethical constraints while adopting a human-centred approach.
1
‘A Teaspoon of Authenticity’: Exploring How Young Adults BeReal on Social Media
Ananya Reddy (Pennsylvania State University, University Park, Pennsylvania, United States)Priya C. Kumar (Pennsylvania State University, University Park, Pennsylvania, United States)
BeReal is the latest social media platform to tout itself as a more authentic space for connection. The app notifies users at a random time each day and gives users two minutes to post an image from their smartphone’s front- and back-facing cameras. While prior work has theorized authenticity on social media and studied how various user populations enact authenticity, more research is needed to understand whether and how specific design features afford authenticity. We conducted a walkthrough of the BeReal app and interviewed 31 young adults about their experiences on BeReal. We found that participants approached authenticity in two ways—as extemporaneous interaction and as comprehensive self-presentation—and that BeReal harnesses the affordances of visibility, editability, availability, and persistence in a way that enables the former more than the latter. Based on our findings, we offer four recommendations for designers and researchers who seek to support authenticity online.
1
Single-handed Folding Interactions with a Modified Clamshell Flip Phone
Yen-Ting Yeh (University of Waterloo, Waterloo, Ontario, Canada)Antony Albert Raj Irudayaraj (University of Waterloo, Waterloo, Ontario, Canada)Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
We explore and evaluate single-handed folding interactions suitable for “modified clamshell flip phones” with a full screen touch display that folds in half along the short dimension. Three categories of interactions are identified: only-fold, touch-enhanced fold, and fold-enhanced touch; in which gestures are created using fold direction, fold magnitude, and touch position. A prototype evaluation device is built to resemble clamshell flip phones, but with a modified hinge and spring system to enable folding in both directions. A study investigates performance and preference for 30 fold gestures to discover which are most promising. To demonstrate how folding interactions could be incorporated into flip phone interfaces, applications such as map browsing, text editing, and menu shortcuts are described.
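The gesture space described above combines fold direction, fold magnitude, and touch position, grouped into three categories (only-fold, touch-enhanced fold, and fold-enhanced touch). As a rough illustration of how such a gesture space could be dispatched in software, here is a minimal sketch; it is not the authors' implementation, and all names and thresholds are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class FoldDirection(Enum):
    INWARD = "inward"
    OUTWARD = "outward"

@dataclass
class FoldEvent:
    direction: FoldDirection
    magnitude: float            # 0.0 (flat) .. 1.0 (fully folded); hypothetical scale
    touch_x: Optional[float]    # normalized touch position, or None if no touch

def classify(event: FoldEvent) -> str:
    """Map a fold event onto one of the three gesture categories."""
    if event.touch_x is None:
        # Fold alone carries the gesture: direction plus magnitude.
        depth = "full" if event.magnitude > 0.8 else "partial"
        return f"only-fold:{event.direction.value}:{depth}"
    if event.magnitude > 0.2:
        # A deliberate fold refined by where the thumb touches.
        return f"touch-enhanced-fold:{event.direction.value}@{event.touch_x:.1f}"
    # A touch gesture with only a slight fold modifier.
    return f"fold-enhanced-touch@{event.touch_x:.1f}"
```

The point of the sketch is only that the three categories partition the input space cleanly, which is what makes the 30-gesture set enumerable in the study.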
1
ErgoPulse: Electrifying Your Lower Body With Biomechanical Simulation-based Electrical Muscle Stimulation Haptic System in Virtual Reality
Seokhyun Hwang (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Jeongseok Oh (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Seongjun Kang (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Minwoo Seong (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Ahmed Ibrahim Ahmed Mohamed Elsharkawy (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)SeungJun Kim (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)
This study presents ErgoPulse, a system that integrates biomechanical simulation with electrical muscle stimulation (EMS) to provide kinesthetic force feedback to the lower body in virtual reality (VR). ErgoPulse has two main parts: a biomechanical simulation part that calculates lower-body joint torques to replicate forces from VR environments, and an EMS part that translates those torques into muscle stimulations. In the first experiment, we assessed users' ability to discern haptic force intensity and direction, and observed variations in perceived resolution depending on force direction. The second experiment evaluated ErgoPulse's ability to increase haptic force accuracy and user presence in both continuous- and impulse-force VR game environments. The results showed that ErgoPulse's biomechanical simulation increased the accuracy of force delivery compared to traditional EMS, enhancing overall user presence. Furthermore, interviewees proposed improving the haptic experience by integrating additional stimuli such as temperature, skin stretch, and impact.
1
Look Once to Hear: Target Speech Hearing with Noisy Examples
Bandhav Veluri (University of Washington, Seattle, Washington, United States)Malek Itani (University of Washington, Seattle, Washington, United States)Tuochao Chen (Computer Science and Engineering, Seattle, Washington, United States)Takuya Yoshioka (IEEE, Redmond, Washington, United States)Shyamnath Gollakota (University of Washington, Seattle, Washington, United States)
In crowded settings, the human brain can focus on speech from a target speaker, given prior knowledge of how they sound. We introduce a novel intelligent hearable system that achieves this capability, enabling target speech hearing that ignores all interfering speech and noise except the target speaker. A naive approach is to require a clean speech example to enroll the target speaker. However, this is not well aligned with the hearable application domain, since obtaining a clean example is challenging in real-world scenarios, creating a unique user interface problem. We present the first enrollment interface in which the wearer looks at the target speaker for a few seconds to capture a single, short, highly noisy, binaural example of the target speaker. This noisy example is used for enrollment and subsequent speech extraction in the presence of interfering speakers and noise. Our system achieves a signal quality improvement of 7.01 dB using less than 5 seconds of noisy enrollment audio and can process 8 ms audio chunks in 6.24 ms on an embedded CPU. Our user studies demonstrate generalization to real-world static and mobile speakers in previously unseen indoor and outdoor multipath environments. Finally, our enrollment interface for noisy examples does not cause performance degradation compared to clean examples, while being convenient and user-friendly. More broadly, this paper takes an important step toward enhancing human auditory perception with artificial intelligence.
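The enroll-then-extract idea (derive a speaker signature from one short, noisy example, then use it to pick out the target among candidates) can be illustrated with a toy embedding-and-similarity sketch. This is not the paper's neural pipeline: the frame-averaging "embedding" and all names below are hypothetical stand-ins for a learned speaker encoder.

```python
import math

def embed(frames):
    """Toy 'speaker embedding': the mean of feature frames.

    A stand-in for a learned d-vector network; purely illustrative.
    """
    dims = len(frames[0])
    return [sum(f[i] for f in frames) / len(frames) for i in range(dims)]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_target(enrollment_frames, candidate_embeddings):
    """Pick the candidate speaker closest to the noisy enrollment embedding."""
    e = embed(enrollment_frames)
    scores = [cosine(e, c) for c in candidate_embeddings]
    return max(range(len(scores)), key=scores.__getitem__)
```

The sketch shows why a noisy enrollment example can still work: the comparison is relative, so the target only needs to score higher than the interferers, not to be noise-free.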
1
Simulating Emotions With an Integrated Computational Model of Appraisal and Reinforcement Learning
Jiayi Eurus Zhang (University of Jyväskylä, Jyväskylä, Finland)Bernhard Hilpert (Leiden University, Leiden, Netherlands)Joost Broekens (Leiden University, Leiden, Netherlands)Jussi P. P. Jokinen (University of Jyväskylä, Jyväskylä, Finland)
Predicting users' emotional states during interaction is a long-standing goal of affective computing. However, traditional methods based on sensory data alone fall short due to the interplay between users' latent cognitive states and emotional responses. To address this, we introduce a computational cognitive model that simulates emotion as a continuous process, rather than a static state, during interactive episodes. This model integrates cognitive-emotional appraisal mechanisms with computational rationality, utilizing value predictions from reinforcement learning. Experiments with human participants demonstrate the model's ability to predict and explain the emergence of emotions such as happiness, boredom, and irritation during interactions. Our approach opens the possibility of designing interactive systems that adapt to users' emotional states, thereby improving user experience and engagement. This work also deepens our understanding of the potential of modeling the relationship between reward processing, reinforcement learning, goal-directed behavior, and appraisal.
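The coupling of appraisal with reinforcement-learning value predictions can be illustrated with a toy temporal-difference learner whose prediction error is appraised as an emotion label. This is a minimal sketch assuming a tabular TD(0) setting, not the authors' model; the thresholds and labels are hypothetical.

```python
def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD(0) value update; returns the TD error.

    The TD error doubles as an appraisal input: a positive error is a
    pleasant surprise, a negative one a frustrated expectation.
    """
    td_error = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * td_error
    return td_error

def appraise(td_error, threshold=0.05):
    """Map a reward-prediction error onto a coarse emotion label (illustrative)."""
    if td_error > threshold:
        return "happiness"
    if td_error < -threshold:
        return "irritation"
    return "boredom"
```

Run over an interactive episode, the appraisal output changes continuously as value predictions adapt, which is the sense in which such a model treats emotion as a process rather than a static state.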