List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

5
Unlocking Understanding: An Investigation of Multimodal Communication in Virtual Reality Collaboration
Ryan Ghamandi (University of Central Florida, Orlando, Florida, United States), Ravi Kiran Kattoju (University of Central Florida, Orlando, Florida, United States), Yahya Hmaiti (University of Central Florida, Orlando, Florida, United States), Mykola Maslych (University of Central Florida, Orlando, Florida, United States), Eugene Matthew Taranta (University of Central Florida, Orlando, Florida, United States), Ryan P. McMahan (University of Central Florida, Orlando, Florida, United States), Joseph LaViola (University of Central Florida, Orlando, Florida, United States)
Communication in collaboration, especially synchronous, remote communication, is crucial to the success of task-specific goals. Insufficient or excessive forms of communication may lead to detrimental effects on task performance while increasing mental fatigue. However, identifying which combinations of communication modalities provide the most efficient transfer of information in collaborative settings will greatly improve collaboration. To investigate this, we developed a remote, synchronous, asymmetric VR collaborative assembly task application, where users play the role of either mentor or mentee, and were exposed to different combinations of three communication modalities: voice, gestures, and gaze. Through task-based experiments with 25 pairs of participants (50 individuals), we evaluated quantitative and qualitative data and found that gaze did not differ significantly from multiple combinations of communication modalities. Our qualitative results indicate that mentees experienced more difficulty and frustration in completing tasks than mentors, with both types of users preferring all three modalities to be present.
4
Me, My Health, and My Watch: How Children with ADHD Understand Smartwatch Health Data
Elizabeth Ankrah (University of California, Irvine, Irvine, California, United States), Franceli L. Cibrian (Chapman University, Orange, California, United States), Lucas M. Silva (University of California, Irvine, Irvine, California, United States), Arya Tavakoulnia (University of California Irvine, Irvine, California, United States), Jesus Armando Beltran (UCI, Irvine, California, United States), Sabrina Schuck (University of California Irvine, Irvine, California, United States), Kimberley D. Lakes (University of California Riverside, Riverside, California, United States), Gillian R. Hayes (University of California, Irvine, Irvine, California, United States)
Children with ADHD can experience a wide variety of challenges related to self-regulation, which can lead to poor educational, health, and wellness outcomes. Technological interventions, such as mobile and wearable health systems, can support data collection and reflection about health status. However, little is known about how children with ADHD interpret such data. We conducted a deployment study with 10 children, aged 10 to 15, for six weeks, during which they used a smartwatch in their homes. Results from observations and interviews during this study indicate that children with ADHD can interpret their own health data, particularly in the moment. However, as children with ADHD develop more autonomy, smartwatch systems may require alternatives for data reflection that are interpretable and actionable for them. This work contributes to the scholarly discourse around health data visualization, particularly in considering implications for the design of health technologies for children with ADHD.
4
Personalizing Privacy Protection With Individuals' Regulatory Focus: Would You Preserve or Enhance Your Information Privacy?
Reza Ghaiumy Anaraky (New York University, New York City, New York, United States), Yao Li (University of Central Florida, Orlando, Florida, United States), Hichang Cho (National University of Singapore, Singapore, Singapore), Danny Yuxing Huang (New York University, New York, New York, United States), Kaileigh Angela Byrne (Clemson University, Clemson, South Carolina, United States), Bart Knijnenburg (Clemson University, Clemson, South Carolina, United States), Oded Nov (New York University, New York, New York, United States)
In this study, we explore the effectiveness of persuasive messages endorsing the adoption of a privacy protection technology (IoT Inspector) tailored to individuals' regulatory focus (promotion or prevention). We explore if and how regulatory fit (i.e., tuning the goal-pursuit mechanism to individuals' internal regulatory focus) can increase persuasion and adoption. We conducted a between-subject experiment (N = 236) presenting participants with the IoT Inspector in gain ("Privacy Enhancing Technology"---PET) or loss ("Privacy Preserving Technology"---PPT) framing. Results show that the effect of regulatory fit on adoption is mediated by trust and privacy calculus processes: prevention-focused users who read the PPT message trust the tool more. Furthermore, privacy calculus favors using the tool when promotion-focused individuals read the PET message. We discuss the contribution of understanding the cognitive mechanisms behind regulatory fit in privacy decision-making to support privacy protection.
4
MOSion: Gaze Guidance with Motion-triggered Visual Cues by Mosaic Patterns
Arisa Kohtani (Tokyo Institute of Technology, Tokyo, Japan), Shio Miyafuji (Tokyo Institute of Technology, Tokyo, Japan), Keishiro Uragaki (Aoyama Gakuin University, Tokyo, Japan), Hidetaka Katsuyama (Tokyo Institute of Technology, Tokyo, Japan), Hideki Koike (Tokyo Institute of Technology, Tokyo, Japan)
We propose a gaze-guiding method called MOSion that adjusts its guiding strength in response to observers' motion, based on a high-speed projector and the afterimage effect in the human visual system. Our method decomposes the target area into mosaic patterns to embed visual cues in the perceived images. The patterns direct only the attention of moving observers to the target area; a stationary observer sees the original image with little distortion because of light integration in visual perception. Precomputation of the patterns provides the adaptive guiding effect without tracking devices or motion-dependent computational costs. The evaluation and the user study show that the mosaic decomposition enhances perceived saliency with few visual artifacts, especially under moving conditions. Our method, embedded in white light, works in various situations such as planar posters, advertisements, and curved objects.
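The decomposition idea in this abstract can be sketched in a few lines. This is a simplified two-frame scheme with assumed parameters (checkerboard tiles, a single cue mask, a flat test image), not the authors' projector pipeline: the two frames average back to the original image, so a stationary eye integrating them sees the original, while the ±delta mosaic can become visible under motion.

```python
import numpy as np

def mosaic_decompose(image, cue_mask, amplitude=0.2, tile=8):
    """Split `image` into two frames that average back to `image`.

    Inside `cue_mask`, a checkerboard of `tile`-pixel blocks is
    brightened in one frame and darkened in the other. Alternated at
    high speed, the frames integrate to the original image for a
    stationary observer, while the mosaic emerges under motion.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    checker = ((ys // tile + xs // tile) % 2).astype(float) * 2 - 1  # +/-1 blocks
    delta = amplitude * checker * cue_mask
    frame_a = np.clip(image + delta, 0.0, 1.0)
    frame_b = np.clip(image - delta, 0.0, 1.0)
    return frame_a, frame_b

img = np.full((32, 32), 0.5)                       # flat gray image in [0, 1]
mask = np.zeros((32, 32)); mask[8:24, 8:24] = 1.0  # target area to highlight
a, b = mosaic_decompose(img, mask)
print(np.allclose((a + b) / 2, img))               # integration recovers the original
```

The key property is that the cue lives entirely in the difference between frames, so any per-pixel temporal average cancels it out.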
4
Using the Visual Language of Comics to Alter Sensations in Augmented Reality
Arpit Bhatia (University of Copenhagen, Copenhagen, Denmark), Henning Pohl (Aalborg University, Aalborg, Denmark), Teresa Hirzle (University of Copenhagen, Copenhagen, Denmark), Hasti Seifi (Arizona State University, Tempe, Arizona, United States), Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)
Augmented Reality (AR) excels at altering what we see but non-visual sensations are difficult to augment. To augment non-visual sensations in AR, we draw on the visual language of comic books. Synthesizing comic studies, we create a design space describing how to use comic elements (e.g., onomatopoeia) to depict non-visual sensations (e.g., hearing). To demonstrate this design space, we built eight demos, such as speed lines to make a user think they are faster and smell lines to make a scent seem stronger. We evaluate these elements in a qualitative user study (N=20) where participants performed everyday tasks with comic elements added as augmentations. All participants stated feeling a change in perception for at least one sensation, with perceived changes detected by between four participants (touch) and 15 participants (hearing). The elements also had positive effects on emotion and user experience, even when participants did not feel changes in perception.
4
Observer Effect in Social Media Use
Koustuv Saha (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States), Pranshu Gupta (Georgia Institute of Technology, Atlanta, Georgia, United States), Gloria Mark (University of California, Irvine, Irvine, California, United States), Emre Kiciman (Microsoft Research, Redmond, Washington, United States), Munmun De Choudhury (Georgia Institute of Technology, Atlanta, Georgia, United States)
While social media data is a valuable source for inferring human behavior, its in-practice utility hinges on extraneous factors. Notable is the "observer effect," where awareness of being monitored can alter people's social media use. We present a causal-inference study examining this phenomenon in the longitudinal Facebook use of 300+ participants who voluntarily shared their data spanning an average of 82 months before and 5 months after study enrollment. We measured deviation from participants' expected social media use through time-series analyses. Individuals with high cognitive ability and low neuroticism decreased posting immediately after enrollment, while those with high openness increased posting. The sharing of self-focused content decreased, while diverse topics emerged. We situate the findings within theories of self-presentation and self-consciousness. We discuss the implications of correcting for the observer effect in social media data-driven measurements, and how this phenomenon sheds light on the ethics of these measurements.
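The "deviation from expected use" measure described here can be illustrated with a minimal interrupted time-series sketch. The weekly counts and the linear pre-enrollment trend below are illustrative assumptions, not the paper's actual model: fit a trend to pre-enrollment behavior, extrapolate it past enrollment, and compare observed post-enrollment behavior against it.

```python
import numpy as np

def posting_deviation(pre_counts, post_counts):
    """Fit a linear trend to pre-enrollment weekly posting counts,
    extrapolate it over the post-enrollment weeks, and return the
    mean observed-minus-expected deviation (negative => decrease)."""
    t_pre = np.arange(len(pre_counts))
    slope, intercept = np.polyfit(t_pre, pre_counts, 1)
    t_post = np.arange(len(pre_counts), len(pre_counts) + len(post_counts))
    expected = slope * t_post + intercept
    return float(np.mean(np.asarray(post_counts) - expected))

pre = [10, 11, 9, 12, 10, 11, 12, 13]    # hypothetical weekly post counts
post = [6, 5, 7, 5]                       # drop after enrollment
print(posting_deviation(pre, post) < 0)   # negative deviation: posting decreased
```

A negative deviation after enrollment, as in this toy series, is the shape of the observer effect the study measures (with far richer controls in the actual analysis).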
4
The Social Journal: Investigating Technology to Support and Reflect on Social Interactions
Sophia Sakel (LMU Munich, Munich, Germany), Tabea Blenk (LMU Munich, Munich, Germany), Albrecht Schmidt (LMU Munich, Munich, Germany), Luke Haliburton (LMU Munich, Munich, Germany)
Social interaction is a crucial part of what it means to be human. Maintaining a healthy social life is strongly tied to positive outcomes for both physical and mental health. While we use personal informatics data to reflect on many aspects of our lives, technology-supported reflection for social interactions is currently under-explored. To address this, we first conducted an online survey (N=124) to understand how users want to be supported in their social interactions. Based on this, we designed and developed an app for users to track and reflect on their social interactions and deployed it in the wild for two weeks (N=25). Our results show that users are interested in tracking meaningful in-person interactions that are currently untracked, and that an app can effectively support self-reflection on social interaction frequency and social load. We contribute insights and concrete design recommendations for technology-supported reflection on social interaction.
4
Predicting the Noticeability of Dynamic Virtual Elements in Virtual Reality
Zhipeng Li (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Yi Fei Cheng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Yukang Yan (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), David Lindlbauer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
While Virtual Reality (VR) systems can present virtual elements such as notifications anywhere, designing them so they are not missed by or distracting to users is highly challenging for content creators. To address this challenge, we introduce a novel approach to predict the noticeability of virtual elements. It computes the visual saliency distribution of what users see, and analyzes the temporal changes of the distribution with respect to the dynamic virtual elements that are animated. The computed features serve as input for a long short-term memory (LSTM) model that predicts whether a virtual element will be noticed. Our approach is based on data collected from 24 users in different VR environments performing tasks such as watching a video or typing. We evaluate our approach (n = 12), and show that it can predict the timing of when users notice a change to a virtual element within 2.56 sec compared to a ground truth, and demonstrate the versatility of our approach with a set of applications. We believe that our predictive approach opens the path for computational design tools that assist VR content creators in creating interfaces that automatically adapt virtual elements based on noticeability.
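The pipeline this abstract describes (per-frame saliency features feeding an LSTM that scores noticeability) can be sketched with a single untrained LSTM cell. Feature choices, layer sizes, and the random weights below are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Single-cell LSTM scoring a sequence of per-frame saliency features,
    e.g. (saliency at the virtual element, its temporal change)."""

    def __init__(self, n_features, n_hidden):
        s = 1.0 / np.sqrt(n_hidden)
        # one stacked weight matrix for the input, forget, cell, output gates
        self.W = rng.uniform(-s, s, (4 * n_hidden, n_features + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.w_out = rng.uniform(-s, s, n_hidden)
        self.n_hidden = n_hidden

    def predict(self, seq):
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        for x in seq:                        # x: feature vector for one frame
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return sigmoid(self.w_out @ h)       # P(element is noticed)

model = TinyLSTM(n_features=2, n_hidden=8)
seq = rng.random((30, 2))                    # 30 frames of (saliency, delta)
print(model.predict(seq))                    # probability in (0, 1)
```

In the paper's setting such a model would be trained on the collected gaze/notice labels; this sketch only shows the data flow from a feature sequence to a noticeability score.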
4
Tagnoo: Enabling Smart Room-Scale Environments with RFID-Augmented Plywood
Yuning Su (Simon Fraser University, Burnaby, British Columbia, Canada), Tingyu Zhang (Simon Fraser University, Burnaby, British Columbia, Canada), Jiuen Feng (University of Science and Technology of China, Hefei, Anhui, China), Yonghao Shi (Simon Fraser University, Burnaby, British Columbia, Canada), Xing-Dong Yang (Simon Fraser University, Burnaby, British Columbia, Canada), Te-Yen Wu (Florida State University, Tallahassee, Florida, United States)
Tagnoo is a computational plywood augmented with RFID tags, aimed at empowering woodworkers to effortlessly create room-scale smart environments. Unlike existing solutions, Tagnoo does not necessitate technical expertise or disrupt established woodworking routines. This battery-free and cost-effective solution seamlessly integrates computation capabilities into plywood, while preserving its original appearance and functionality. In this paper, we explore various parameters that can influence Tagnoo's sensing performance and woodworking compatibility through a series of experiments. Additionally, we demonstrate the construction of a small office environment, comprising a desk, chair, shelf, and floor, all crafted by an experienced woodworker using conventional tools such as a table saw and screws while adhering to established construction workflows. Our evaluation confirms that the smart environment can accurately recognize 18 daily objects and user activities, such as a user sitting on the floor or a glass lunchbox placed on the desk, with over 90% accuracy.
4
Robot-Assisted Decision-Making: Unveiling the Role of Uncertainty Visualisation and Embodiment
Sarah Schömbs (The University of Melbourne, Melbourne, VIC, Australia), Saumya Pareek (University of Melbourne, Melbourne, Victoria, Australia), Jorge Goncalves (University of Melbourne, Melbourne, Australia), Wafa Johal (University of Melbourne, Melbourne, VIC, Australia)
Robots are embodied agents that act under several sources of uncertainty. When assisting humans in a collaborative task, robots need to communicate their uncertainty to help inform decisions. In this study, we examine the use of visualising a robot's uncertainty in a high-stakes assisted decision-making task. In particular, we explore how different modalities of uncertainty visualisations (graphical display vs. the robot's embodied behaviour) and confidence levels (low, high, 100%) conveyed by a robot affect human decision-making and perception during a collaborative task. Our results show that these visualisations significantly impact how participants arrive at their decisions as well as how they perceive the robot's transparency across the different confidence levels. We highlight potential trade-offs and offer implications for robot-assisted decision-making. Our work contributes empirical insights on how humans make use of uncertainty visualisations conveyed by a robot in a critical robot-assisted decision-making scenario.
4
Signs of the Smart City: Exploring the Limits and Opportunities of Transparency
Eric Corbett (Google Research, New York, New York, United States), Graham Dove (New York University, New York, New York, United States)
This paper reports on a research through design (RtD) inquiry into public perceptions of the transparency of Internet of Things (IoT) sensors increasingly deployed within urban neighborhoods as part of smart city programs. In particular, we report on the results of three participatory design workshops during which 40 New York City residents used physical signage as a medium for materializing transparency concerns about several sensors. We found that people's concerns went beyond making sensors more transparent; instead, they sought to reveal the technology's interconnected social, political, and economic processes. Building from these findings, we highlight the opportunities in moving from treating transparency as an object to treating it as an ongoing activity. We argue that this move opens opportunities for designers and policy-makers to provide meaningful and actionable transparency of smart cities.
4
DiaryMate: Understanding User Perceptions and Experience in Human-AI Collaboration for Personal Journaling
Taewan Kim (KAIST, Daejeon, Korea, Republic of), Donghoon Shin (University of Washington, Seattle, Washington, United States), Young-Ho Kim (NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of), Hwajung Hong (KAIST, Daejeon, Korea, Republic of)
With their generative capabilities, large language models (LLMs) have transformed the role of technological writing assistants from simple editors to writing collaborators. Such a transition emphasizes the need to understand user perception and experience, such as balancing user intent and the involvement of LLMs across various writing domains, when designing writing assistants. In this study, we delve into the less explored domain of personal writing, focusing on the use of LLMs in introspective activities. Specifically, we designed DiaryMate, a system that assists users in journal writing with an LLM. Through a 10-day field study (N=24), we observed that participants used the diverse sentences generated by the LLM to reflect on their past experiences from multiple perspectives. However, we also observed that they tended to over-rely on the LLM, often prioritizing its emotional expressions over their own. Drawing from these findings, we discuss design considerations for leveraging LLMs in personal writing practice.
3
Visual Noise Cancellation: Exploring Visual Discomfort and Opportunities for Vision Augmentations
Junlei Hong (University of Otago, Dunedin, New Zealand), Tobias Langlotz (University of Otago, Dunedin, New Zealand), Jonathan Sutton (University of Otago, Dunedin, New Zealand), Holger Regenbrecht (University of Otago, Dunedin, Otago, New Zealand)
Acoustic noise control or cancellation (ANC) is a commonplace component of modern audio headphones. ANC aims to actively mitigate disturbing environmental noise for a quieter and improved listening experience, and works by digitally controlling the frequency and amplitude characteristics of sound. Much less explored is visual noise and active visual noise control, which we address here. We first explore visual noise and scenarios in which visual noise arises, based on findings from four workshops we conducted. We then introduce the concept of visual noise cancellation (VNC) and how it can be used to reduce the identified effects of visual noise. In addition, we developed head-worn demonstration prototypes to practically explore the concept of active VNC with selected scenarios in a user study. Finally, we discuss applications of VNC, including vision augmentations that moderate the user's view of the environment to address perceptual needs and to provide augmented reality content.
3
Understanding Users' Interaction with Login Notifications
Philipp Markert (Ruhr University Bochum, Bochum, Germany), Leona Lassak (Ruhr University Bochum, Bochum, Germany), Maximilian Golla (CISPA Helmholtz Center for Information Security, Saarbrücken, Germany), Markus Dürmuth (Leibniz University Hannover, Hannover, Germany)
Login notifications intend to inform users about sign-ins and help them protect their accounts from unauthorized access. Notifications are usually sent if a login deviates from previous ones, potentially indicating malicious activity. They contain information like the location, date, time, and device used to sign in. Users are challenged to verify whether they recognize the login (because it was them or someone they know) or to protect their account from unwanted access. In a user study, we explore users' comprehension, reactions, and expectations of login notifications. We utilize two treatments to measure users' behavior in response to notifications sent for a login they initiated or based on a malicious actor relying on statistical sign-in information. We find that users identify legitimate logins but need more support to halt malicious sign-ins. We discuss the identified problems and give recommendations for service providers to ensure usable and secure logins for everyone.
3
Metaphors in Voice User Interfaces: A Slippery Fish
Smit Desai (University of Illinois, Urbana-Champaign, Champaign, Illinois, United States), Michael Bernard Twidale (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States)
We explore a range of different metaphors used for Voice User Interfaces (VUIs) by designers, end-users, manufacturers, and researchers, using a novel framework derived from semi-structured interviews and a literature review. We focus less on the well-established idea of metaphors as a way for interface designers to help novice users learn how to interact with novel technology, and more on other ways metaphors can be used. We find that the metaphors people use are contextually fluid, can change with the mode of conversation, and can reveal differences in how people perceive VUIs compared to other devices. Not all metaphors are helpful, and some may be offensive. Analyzing this broader class of metaphors can help us understand, and perhaps even predict, problems. Metaphor analysis can be a low-cost tool to inspire design creativity and facilitate complex discussions about sociotechnical issues, enabling us to spot potential opportunities and problems in the situated use of technologies.
3
"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents
Zhiping Zhang (Khoury College of Computer Sciences, Boston, Massachusetts, United States), Michelle Jia (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Hao-Ping (Hank) Lee (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Bingsheng Yao (Rensselaer Polytechnic Institute, Troy, New York, United States), Sauvik Das (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Ada Lerner (Northeastern University, Boston, Massachusetts, United States), Dakuo Wang (Northeastern University, Boston, Massachusetts, United States), Tianshi Li (Northeastern University, Boston, Massachusetts, United States)
The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the needs for paradigm shifts to protect the privacy of LLM-based CA users.
3
Mnemosyne - Supporting Reminiscence for Individuals with Dementia in Residential Care Settings
Andrea Baumann (Lancaster University, Lancaster, United Kingdom), Peter Shaw (Lancaster University, Lancaster, United Kingdom), Ludwig Trotter (Lancaster University, Lancaster, Lancashire, United Kingdom), Sarah Clinch (The University of Manchester, Manchester, United Kingdom), Nigel Davies (Lancaster University, Lancaster, United Kingdom)
Reminiscence is known to play an important part in helping to mitigate the effects of dementia. Within the HCI community, work has typically focused on supporting reminiscence at an individual or social level, but less attention has been given to supporting reminiscence in residential care settings. This lack of research became particularly apparent during the COVID pandemic, when traditional forms of reminiscence involving physical artefacts and face-to-face interactions became especially challenging. In this paper we report on the design, development and evaluation of a reminiscence system, deployed in a residential care home over a two-year period that included the pandemic. Mnemosyne comprises a pervasive display network and a browser-based application, whose adoption and use we explored using a mixed-methods approach. Our findings offer insights that will help shape the development and evaluation of future systems, particularly those that use pervasive displays to support unsupervised reminiscence.
3
A Robot Jumping the Queue: Expectations About Politeness and Power During Conflicts in Everyday Human-Robot Encounters
Franziska Babel (Linköping University, Linköping, Sweden), Robin Welsch (Aalto University, Espoo, Finland), Linda Miller (Ulm University, Ulm, Germany), Philipp Hock (Linköping University, Linköping, Sweden), Sam Thellman (Linköping University, Linköping, Sweden), Tom Ziemke (Linköping University, Linköping, Sweden)
Increasing encounters between people and autonomous service robots may lead to conflicts due to mismatches between human expectations and robot behaviour. This interactive online study (N = 335) investigated human-robot interactions at an elevator, focusing on the effect of communication and behavioural expectations on participants' acceptance and compliance. Participants evaluated a humanoid delivery robot primed as either submissive or assertive. The robot either matched or violated these expectations by using a command or appeal to ask for priority and then entering either first or waiting for the next ride. The results highlight that robots are less accepted if they violate expectations by entering first or using a command. Interactions were more effective if participants expected an assertive robot which then asked politely for priority and entered first. The findings emphasize the importance of power expectations in human-robot conflicts for the robot's evaluation and effectiveness in everyday situations.
3
Technology-Mediated Non-pharmacological Interventions for Dementia: Needs for and Challenges in Professional, Personalized and Multi-Stakeholder Collaborative Interventions
Yuling Sun (East China Normal University, Shanghai, China), Zhennan Yi (Beijing Normal University, Beijing, China), Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, Hong Kong), Junyan Mao (East China Normal University, Shanghai, China), Xin Tong (Duke Kunshan University, Kunshan, Suzhou, China)
Designing and using technologies to support Non-Pharmacological Interventions (NPI) for People with Dementia (PwD) has drawn increasing attention in HCI, with the expectation of higher user engagement and positive outcomes. Yet, technologies for NPI can only be valuable if practitioners successfully incorporate them into their ongoing intervention practices beyond a limited research period. Currently, we know little about how practitioners experience and perceive these technologies in practical NPI for PwD. In this paper, we investigate this question through observations of five in-person NPI activities and interviews with 11 therapists and 5 caregivers. Our findings elaborate the practical NPI workflow and its characteristics, as well as practitioners' attitudes, experiences, and perceptions of technology-mediated NPI in practice. Generally, our participants emphasized that practical NPI is a complex and professional practice, requiring fine-grained, personalized evaluation and planning, and that its execution is situated and multi-stakeholder collaborative. Yet existing technologies often fail to consider these specific characteristics, which limits their practical effectiveness or sustainable use. Drawing on our findings, we discuss implications for designing more useful and practical NPI technologies.
3
Investigating Contextual Notifications to Drive Self-Monitoring in mHealth Apps for Weight Maintenance
Yu-Peng Chen (University of Florida, Gainesville, Florida, United States), Julia Woodward (University of South Florida, Tampa, Florida, United States), Dinank Bista (University of Florida, Gainesville, Florida, United States), Xuanpu Zhang (Department of CISE, University of Florida, Gainesville, Florida, United States), Ishvina Singh (University of Florida, Gainesville, Florida, United States), Oluwatomisin Obajemu (University of Florida, Gainesville, Florida, United States), Meena N. Shankar (University of Florida, Gainesville, Florida, United States), Kathryn M. Ross (University of Florida, Gainesville, Florida, United States), Jaime Ruiz (University of Florida, Gainesville, Florida, United States), Lisa Anthony (University of Florida, Gainesville, Florida, United States)
Mobile health applications for weight maintenance offer self-monitoring as a tool to empower users to achieve health goals (e.g., losing weight); yet maintaining consistent self-monitoring over time proves challenging for users. These apps use push notifications to help increase users' app engagement and reduce long-term attrition, but notifications are often ignored because they appear at inopportune moments. Therefore, we analyzed whether delivering push notifications based on time alone or also considering user context (e.g., current activity) affected users' engagement in a weight maintenance app, in a 4-week in-the-wild study with 30 participants. We found no difference in participants' overall (across the day) self-monitoring frequency between the two conditions, but in the context-based condition, participants responded faster and more frequently to notifications, and logged their data more promptly (closer to when eating/exercising occurred). Our work informs the design of notifications in weight maintenance apps to improve their efficacy in promoting self-monitoring.
3
MindfulDiary: Harnessing Large Language Model to Support Psychiatric Patients' Journaling
Taewan Kim (KAIST, Daejeon, Korea, Republic of), Seolyeong Bae (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of), Hyun AH Kim (NAVER Cloud, Gyeonggi-do, Korea, Republic of), Su-woo Lee (Wonkwang University Hospital, Iksan-si, Korea, Republic of), Hwajung Hong (KAIST, Daejeon, Korea, Republic of), Chanmo Yang (Wonkwang University Hospital, Wonkwang University, Iksan, Jeonbuk, Korea, Republic of), Young-Ho Kim (NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of)
Large Language Models (LLMs) offer promising opportunities in mental health domains, although their inherent complexity and low controllability elicit concern regarding their applicability in clinical settings. We present MindfulDiary, an LLM-driven journaling app that helps psychiatric patients document daily experiences through conversation. Designed in collaboration with mental health professionals, MindfulDiary takes a state-based approach to safely comply with the experts' guidelines while carrying on free-form conversations. Through a four-week field study involving 28 patients with major depressive disorder and five psychiatrists, we examined how MindfulDiary facilitates patients' journaling practice and clinical care. The study revealed that MindfulDiary supported patients in consistently enriching their daily records and helped clinicians better empathize with their patients through an understanding of their thoughts and daily contexts. Drawing on these findings, we discuss the implications of leveraging LLMs in the mental health domain, bridging the technical feasibility and their integration into clinical settings.
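The state-based approach this abstract mentions can be illustrated with a small controller sketch: each conversation stage carries its own expert-style instruction and a turn budget, so free-form dialogue stays inside guideline bounds. The stage names, instructions, and turn limits below are hypothetical, not the actual MindfulDiary design.

```python
# Hypothetical stages and per-stage instructions; the real MindfulDiary
# states and clinical guidelines are not specified in this listing.
STAGES = [
    ("rapport", "Greet the patient and ask how the day went.", 2),
    ("exploration", "Ask open-ended follow-ups about events and feelings.", 4),
    ("wrap_up", "Summarize the entry and close supportively.", 1),
]

class StagedJournal:
    """Keeps free-form conversation inside expert-defined stage bounds."""

    def __init__(self):
        self.stage_idx = 0
        self.turns_in_stage = 0

    def system_prompt(self):
        name, instruction, _ = STAGES[self.stage_idx]
        return f"[stage: {name}] {instruction}"   # prepended to each LLM call

    def advance(self):
        """Move to the next stage once its turn budget is spent."""
        _, _, max_turns = STAGES[self.stage_idx]
        self.turns_in_stage += 1
        if self.turns_in_stage >= max_turns and self.stage_idx < len(STAGES) - 1:
            self.stage_idx += 1
            self.turns_in_stage = 0

bot = StagedJournal()
print(bot.system_prompt())    # starts in the "rapport" stage
for _ in range(3):
    bot.advance()
print(bot.system_prompt())    # budget spent -> "exploration" stage
```

The design point is that the LLM never decides the stage itself; deterministic state transitions enforce the experts' structure while each stage's prompt still permits free-form replies.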
3
Decide Yourself or Delegate - User Preferences Regarding the Autonomy of Personal Privacy Assistants in Private IoT-Equipped Environments
Karola Marky (Ruhr-University Bochum, Bochum, Germany)Alina Stöver (Technische Universität Darmstadt, Darmstadt, Germany)Sarah Prange (University of the Bundeswehr Munich, Munich, Germany)Kira Bleck (TU Darmstadt, Darmstadt, Germany)Paul Gerber (Technische Universität Darmstadt, Darmstadt, Germany)Verena Zimmermann (ETH Zürich, Zürich, Switzerland)Florian Müller (LMU Munich, Munich, Germany)Florian Alt (University of the Bundeswehr Munich, Munich, Germany)Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
Personalized privacy assistants (PPAs) communicate privacy-related decisions of their users to Internet of Things (IoT) devices. There are different ways to implement PPAs by varying the degree of autonomy or decision model. This paper investigates user perceptions of PPA autonomy models and privacy profiles (archetypes of individual privacy needs) as a basis for PPA decisions in private environments (e.g., a friend's home). We first explore how privacy profiles can be assigned to users and propose an assignment method. Next, we investigate user perceptions in 18 usage scenarios with varying contexts, data types, and numbers of decisions in a study with 1126 participants. We found considerable differences between the profiles in settings with few decisions. When the number of decisions is high (> 1/h), participants exclusively preferred fully autonomous PPAs. Finally, we discuss implications and recommendations for designing scalable PPAs that serve as privacy interfaces for future IoT devices.
2
Spatial Gaze Markers: Supporting Effective Task Switching in Augmented Reality
Mathias N. Lystbæk (Aarhus University, Aarhus, Denmark)Ken Pfeuffer (Aarhus University, Aarhus, Denmark)Tobias Langlotz (University of Otago, Dunedin, New Zealand)Jens Emil Sloth Grønbæk (Aarhus University, Aarhus, Denmark)Hans Gellersen (Lancaster University, Lancaster, United Kingdom)
Task switching can occur frequently in daily routines involving physical activity. In this paper, we introduce Spatial Gaze Markers, an augmented reality tool that supports users in immediately returning to the last point of interest after an attention shift. The tool is task-agnostic, using only eye-tracking information to infer distinct points of visual attention and to mark the corresponding area in the physical environment. We present a user study that evaluates the effectiveness of Spatial Gaze Markers in simulated physical repair and inspection tasks against a no-marker baseline. The results give insights into how Spatial Gaze Markers affect user performance, task load, and experience across task types and distraction levels. Our work is relevant for assisting physical workers with simple AR techniques that make task switching faster and less effortful.
2
Sweating the Details: Emotion Recognition and the Influence of Physical Exertion in Virtual Reality Exergaming
Dominic Potts (University of Bath, Bath, United Kingdom)Zoe Broad (University of Bath, Bath, United Kingdom)Tarini Sehgal (University of Bath, Bath, United Kingdom)Joseph Hartley (University of Bath, Bath, United Kingdom)Eamonn O'Neill (University of Bath, Bath, United Kingdom)Crescent Jicol (University of Bath, Bath, United Kingdom)Christopher Clarke (University of Bath, Bath, United Kingdom)Christof Lutteroth (University of Bath, Bath, United Kingdom)
There is great potential for adapting Virtual Reality (VR) exergames based on a user's affective state. However, physical activity and VR interfere with physiological sensors, making affect recognition challenging. We conducted a study (n=72) in which users experienced four emotion-inducing VR exergaming environments (happiness, sadness, stress, and calmness) at three different levels of exertion (low, medium, high). We collected physiological measures through pupillometry, electrodermal activity, heart rate, and facial tracking, as well as subjective affect ratings. Our validated virtual environments, data, and analyses are openly available. We found that the level of exertion influences the way affect can be recognised, as well as affect itself. Furthermore, our results highlight the importance of data cleaning to account for environmental and interpersonal factors interfering with physiological measures. The results shed light on the relationships between physiological measures and affective states and inform design choices about sensors and data cleaning approaches for affective VR.
2
LegacySphere: Facilitating Intergenerational Communication Through Perspective-Taking and Storytelling in Embodied VR
Chenxinran Shen (University of British Columbia, Vancouver, British Columbia, Canada)Joanna McGrenere (University of British Columbia, Vancouver, British Columbia, Canada)Dongwook Yoon (University of British Columbia, Vancouver, British Columbia, Canada)
Intergenerational communication can enhance well-being and family cohesion, but stereotypes and low empathy can be barriers to achieving effective communication. VR perspective-taking is a potential approach that is known to enhance understanding and empathy toward others by allowing a user to take another's viewpoint. In this study, we introduce LegacySphere, a novel VR perspective-taking experience leveraging the combination of embodiment, role-play, and storytelling. To explore LegacySphere's design and impact, we conducted an observational study involving five dyads with a one-generation gap. We found that LegacySphere promotes empathetic and reflexive intergenerational dialogue. Specifically, avatar embodiment encourages what we term "relationship cushioning," fostering a trustful, open environment for genuine communications. The blending of real and embodied identities prompts insightful questions, merging both perspectives. The experience also nurtures a sense of unity and stimulates reflections on aging. Our work highlights the potential of immersive technologies for enhancing empathetic intergenerational relationships.
2
Understanding User Acceptance of Electrical Muscle Stimulation in Human-Computer Interaction
Sarah Faltaous (University of Duisburg-Essen, Essen, Germany)Julie R. Williamson (University of Glasgow, Glasgow, United Kingdom)Marion Koelle (OFFIS - Institute for Information Technology, Oldenburg, Germany)Max Pfeiffer (Aldi Sued, Muelheim a.d.R., NRW, Germany)Jonas Keppel (University of Duisburg-Essen, Essen, Germany)Stefan Schneegass (University of Duisburg-Essen, Essen, NRW, Germany)
Electrical Muscle Stimulation (EMS) has unique capabilities that can manipulate users' actions or perceptions, such as actuating user movement while walking, changing the perceived texture of food, and guiding movements for a user learning an instrument. These applications highlight the potential utility of EMS, but such benefits may be lost if users reject EMS. To investigate user acceptance of EMS, we conducted an online survey (N=101). We compared eight scenarios, six from HCI research applications and two from the sports and health domain. To gain further insights, we conducted in-depth interviews with a subset of the survey respondents (N=10). The results point to the challenges and potential of EMS regarding social and technological acceptance, showing that there is greater acceptance of applications that manipulate action than those that manipulate perception. The interviews revealed safety concerns and user expectations for the design and functionality of future EMS applications.
2
A Systematic Review and Meta-analysis of the Effectiveness of Body Ownership Illusions in Virtual Reality
Aske Mottelson (IT University of Copenhagen, Copenhagen, Denmark)Andreea Muresan (University of Copenhagen, Copenhagen, Denmark)Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)Guido Makransky (University of Copenhagen, Copenhagen, Denmark)
Body ownership illusions (BOIs) occur when participants experience that their actual body is replaced by a body shown in virtual reality (VR). Based on a systematic review of the cumulative evidence on BOIs from 111 research articles published from 2010 to 2021, this article summarizes the findings of empirical studies of BOIs. Following the PRISMA guidelines, the review points to diverse experimental practices for inducing and measuring body ownership. The two major components of embodiment measurement, body ownership and agency, are examined. The embodiment of virtual avatars generally leads to modest body ownership and slightly higher agency. We also find that BOI research lacks statistical power and standardization across tasks, measurement instruments, and analysis approaches. Furthermore, the reviewed studies showed a lack of clarity in fundamental terminology, constructs, and theoretical underpinnings. These issues restrict scientific advances on the major components of BOIs, and together impede scientific rigor and theory-building.
2
Uncovering and Addressing Blink-Related Challenges in Using Eye Tracking for Interactive Systems
Jesse W. Grootjen (LMU Munich, Munich, Germany)Henrike Weingärtner (LMU Munich, Munich, Germany)Sven Mayer (LMU Munich, Munich, Germany)
Currently, interactive systems use physiological sensing to enable advanced functionalities. While eye tracking is a promising means to understand the user, eye tracking data inherently suffers from missing data due to blinks, which may result in reduced system performance. We conducted a literature review to understand how researchers deal with this issue. We uncovered that researchers often implemented use-case-specific pipelines to overcome the issue, ranging from ignoring missing data to artificial interpolation. With these first insights, we ran a large-scale analysis on 11 publicly available datasets to understand the impact of the various approaches on data quality and accuracy. In doing so, we highlight pitfalls in data processing and identify which methods work best. Based on our results, we provide guidelines for handling eye tracking data for interactive systems. Further, we propose a standard data processing pipeline that allows researchers and practitioners to pre-process and standardize their data efficiently.
2
Designing Haptic Feedback for Sequential Gestural Inputs
Shan Xu (Meta, Redmond, Washington, United States)Sarah Sykes (Meta, Redmond, Washington, United States)Parastoo Abtahi (Meta, Toronto, Ontario, Canada)Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)Daylon Walden (Meta, Redmond, Washington, United States)Michael Glueck (Meta, Toronto, Ontario, Canada)Carine Rognon (Meta, Redmond, Washington, United States)
This work seeks to design and evaluate haptic feedback for sequential gestural inputs, where mid-air hand gestures are used to express system commands. Nine haptic patterns are first designed leveraging metaphors. To pursue efficient interaction, we examine the trade-off between pattern duration and recognition accuracy and find that durations as short as 0.3s-0.5s achieve roughly 80%-90% accuracy. We then examine the haptic design for sequential inputs, where we vary when the feedback for each gesture is provided, along with pattern duration, gesture sequence length, and age. Results show that providing haptic patterns right after detected hand gestures leads to significantly more efficient interaction compared with concatenating all haptic patterns after the gesture sequence. Moreover, the number of gestures had little impact on performance, but age was a significant predictor. Our results suggest that immediate feedback with 0.3s and 0.5s pattern duration would be recommended for younger and older users respectively.
2
ARCADIA: A Gamified Mixed Reality System for Emotional Regulation and Self-Compassion
José Luis Soler-Domínguez (Instituto Tecnológico de Informática, Valencia, Spain)Samuel Navas-Medrano (Instituto Tecnológico de Informática, Valencia, Spain)Patricia Pons (Instituto Tecnológico de Informática, Valencia, Spain)
Mental health and wellbeing have become significant challenges in global society, which emotional regulation strategies hold the potential to address in a transversal manner. However, the persistently declining adherence of patients to therapeutic interventions, coupled with the limited applicability of current technological interventions across diverse individuals and diagnoses, underscores the need for innovative solutions. We present ARCADIA, a Mixed-Reality platform strategically co-designed with therapists to enhance emotional regulation and self-compassion. ARCADIA comprises several gamified therapeutic activities, with a strong emphasis on fostering patient motivation. Through a dual study involving therapists and mental health patients, we validate the fully functional prototype of ARCADIA. Encouraging results are observed in terms of system usability, user engagement, and therapeutic potential. These findings lead us to believe that the combination of Mixed Reality and gamified therapeutic activities could be a significant tool in the future of mental health.
2
Narrating Fitness: Leveraging Large Language Models for Reflective Fitness Tracker Data Interpretation
Konstantin R. Strömel (Osnabrück University, Osnabrück, Germany)Stanislas Henry (ENSEIRB-MATMECA Bordeaux, Bordeaux, France)Tim Johansson (Chalmers University of Technology, Gothenburg, Sweden)Jasmin Niess (University of Oslo, Oslo, Norway)Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden)
While fitness trackers generate and present quantitative data, past research suggests that users often conceptualise their wellbeing in qualitative terms. This discrepancy between numeric data and personal wellbeing perception may limit the effectiveness of personal informatics tools in encouraging meaningful engagement with one’s wellbeing. In this work, we aim to bridge the gap between raw numeric metrics and users’ qualitative perceptions of wellbeing. In an online survey with n=273 participants, we used step data from fitness trackers and compared three presentation formats: standard charts, qualitative descriptions generated by an LLM (Large Language Model), and a combination of both. Our findings reveal that users experienced more reflection, focused attention and reward when presented with the generated qualitative data compared to the standard charts alone. Our work demonstrates how automatically generated data descriptions can effectively complement numeric fitness data, fostering a richer, more reflective engagement with personal wellbeing information.
1
Table Illustrator: Puzzle-based interactive authoring of plain tables
Yanwei Huang (Zhejiang University, Hangzhou, Zhejiang, China)Yurun Yang (Zhejiang University, Hangzhou, China)Xinhuan Shu (Newcastle University, Newcastle Upon Tyne, United Kingdom)Ran Chen (Zhejiang University, Hangzhou, Zhejiang, China)Di Weng (Zhejiang University, Hangzhou, China)Yingcai Wu (Zhejiang University, Hangzhou, Zhejiang, China)
Plain tables excel at displaying data details and are widely used in data presentation, often polished to an elaborate appearance for readability in many scenarios. However, existing authoring tools fail to provide both flexible and efficient support for altering the table layout and styles, motivating us to develop an intuitive and swift tool for table prototyping. To this end, we contribute Table Illustrator, a table authoring system taking a novel visual metaphor, puzzle, as the primary interaction unit. Through combinations and configurations on puzzles, the system enables rapid table construction and supports a diverse range of table layouts and styles. The tool design is informed by practical challenges and requirements from interviews with 10 table practitioners and a structured design space based on an analysis of over 2,500 real-world tables. User studies showed that Table Illustrator achieved comparable performance to Microsoft Excel while reducing users' completion time and perceived workload.
1
Design Principles for Generative AI Applications
Justin D. Weisz (IBM Research AI, Yorktown Heights, New York, United States)Jessica He (IBM Research, Yorktown Heights, New York, United States)Michael Muller (IBM Research, Cambridge, Massachusetts, United States)Gabriela Hoefer (IBM, New York, New York, United States)Rachel Miles (IBM Software, San Jose, California, United States)Werner Geyer (IBM Research, Cambridge, Massachusetts, United States)
Generative AI applications present unique design challenges. As generative AI technologies are increasingly being incorporated into mainstream applications, there is an urgent need for guidance on how to design user experiences that foster effective and safe use. We present six principles for the design of generative AI applications that address unique characteristics of generative AI UX and offer new interpretations and extensions of known issues in the design of AI applications. Each principle is coupled with a set of design strategies for implementing that principle via UX capabilities or through the design process. The principles and strategies were developed through an iterative process involving literature review, feedback from design practitioners, validation against real-world generative AI applications, and incorporation into the design process of two generative AI applications. We anticipate that these principles will usefully inform the design of generative AI applications by driving actionable design recommendations.
1
How AI Processing Delays Foster Creativity: Exploring Research Question Co-Creation with an LLM-based Agent
Yiren Liu (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Si Chen (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Haocong Cheng (University of Illinois Urbana-Champaign, Champaign, Illinois, United States)Mengxia Yu (University of Notre Dame, Notre Dame, Indiana, United States)Xiao Ran (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Andrew Mo (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Yiliu Tang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Yun Huang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)
Developing novel research questions (RQs) often requires extensive literature reviews, especially in interdisciplinary fields. To support RQ development through human-AI co-creation, we leveraged Large Language Models (LLMs) to build an LLM-based agent system named CoQuest. We conducted an experiment with 20 HCI researchers to examine the impact of two interaction designs: breadth-first and depth-first RQ generation. The findings revealed that participants perceived the breadth-first approach as more creative and trustworthy upon task completion. Conversely, during the task, participants considered the depth-first generated RQs as more creative. Additionally, we discovered that AI processing delays allowed users to reflect on multiple RQs simultaneously, leading to a higher quantity of generated RQs and an enhanced sense of control. Our work makes both theoretical and practical contributions by proposing and evaluating a mental model for human-AI co-creation of RQs. We also address potential ethical issues, such as biases and over-reliance on AI, advocating for using the system to improve human research creativity rather than automating scientific inquiry. The system’s source is available at: https://github.com/yiren-liu/coquest.
1
Improving Attention Using Wearables via Haptic and Multimodal Rhythmic Stimuli
Nathan W. Whitmore (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States)Samantha Chan (MIT Media Lab, Cambridge, Massachusetts, United States)Jingru Zhang (Department of Computer Science and Technology, Tsinghua University, Beijing, China)Patrick Chwalek (MIT, Cambridge, Massachusetts, United States)Sam Chin (MIT Media Lab, Cambridge, Massachusetts, United States)Pattie Maes (MIT Media Lab, Cambridge, Massachusetts, United States)
Rhythmic light, sound and haptic stimuli can improve cognition through neural entrainment and by modifying autonomic nervous system function. However, the effects and user experience of using wearables for inducing such rhythmic stimuli have been under-investigated. We conducted a study with 20 participants to understand the effects of rhythmic stimulation wearables on attention. We found that combined sound and light stimuli from a glasses device provided the strongest improvement to attention but were the least usable and socially acceptable. Haptic vibration stimuli from a wristband also improved attention and were the most usable and socially acceptable. Our field study (N=12) with haptic stimuli from a smartwatch showed that such systems can be easy to use and were used frequently in a range of contexts, but more exploration is needed to improve comfort. Our work contributes to developing future wearables to support attention and cognition.
1
Listening to the Voices: Describing Ethical Caveats of Conversational User Interfaces According to Experts and Frequent Users
Thomas Mildner (University of Bremen, Bremen, Germany)Orla Cooney (University College Dublin, Dublin, Ireland)Anna-Maria Meck (BMW Group, Munich, Germany)Marion Bartl (University College Dublin, Dublin, Ireland)Gian-Luca Savino (University of St. Gallen, St. Gallen, Switzerland)Philip R. Doyle (HMD Research, Dublin, Ireland)Diego Garaialde (University College Dublin, Dublin, Ireland)Leigh Clark (Bold Insight, UK, London, United Kingdom)John Sloan (University College Dublin, Dublin, Ireland)Nina Wenig (University of Bremen, Bremen, Germany)Rainer Malaka (University of Bremen, Bremen, Germany)Jasmin Niess (University of Oslo, Oslo, Norway)
Advances in natural language processing and understanding have led to a rapid growth in the popularity of conversational user interfaces (CUIs). While CUIs introduce novel benefits, they also yield risks that may exploit people's trust. Although research looking at unethical design deployed through graphical user interfaces (GUIs) established a thorough taxonomy of so-called dark patterns, there is a need for an equally in-depth understanding in the context of CUIs. Addressing this gap, we interviewed 27 participants from three cohorts: researchers, practitioners, and frequent users of CUIs. Applying thematic analysis, we develop five themes reflecting each cohort's insights about ethical design challenges and introduce the CUI Expectation Cycle, bridging system capabilities and user expectations while respecting each theme's ethical caveats. This research aims to inform future work to consider ethical constraints while adopting a human-centred approach.
1
Elastica: Adaptive Live Augmented Presentations with Elastic Mappings Across Modalities
Yining Cao (University of California, San Diego, San Diego, California, United States)Rubaiat Habib Kazi (Adobe Research, Seattle, Washington, United States)Li-Yi Wei (Adobe Research, San Jose, California, United States)Deepali Aneja (Adobe Research, Seattle, Washington, United States)Haijun Xia (University of California, San Diego, San Diego, California, United States)
Augmented presentations offer compelling storytelling by combining speech content, gestural performance, and animated graphics in a congruent manner. The expressiveness of these presentations stems from the harmonious coordination of spoken words and graphic elements, complemented by smooth animations aligned with the presenter's gestures. However, achieving such desired congruence in a live presentation poses significant challenges due to the unpredictability and imprecision inherent in presenters' real-time actions. Existing methods either leveraged rigid mapping without predefined states or required the presenters to conform to predefined animations. We introduce adaptive presentations that dynamically adjust predefined graphic animations to real-time speech and gestures. Our approach leverages script following and motion warping to establish elastic mappings that generate runtime graphic parameters coordinating speech, gesture, and predefined animation state. Our evaluation demonstrated that the proposed adaptive presentation can effectively mitigate undesired visual artifacts caused by performance deviations and enhance the expressiveness of resulting presentations.
1
EyeEcho: Continuous and Low-power Facial Expression Tracking on Glasses
Ke Li (Cornell University, Ithaca, New York, United States)Ruidong Zhang (Cornell University, Ithaca, New York, United States)Siyuan Chen (Cornell University, Ithaca, New York, United States)Boao Chen (Cornell University, Ithaca, New York, United States)Mose Sakashita (Cornell University, Ithaca, New York, United States)Francois Guimbretiere (Cornell University, Ithaca, New York, United States)Cheng Zhang (Cornell University, Ithaca, New York, United States)
In this paper, we introduce EyeEcho, a minimally-obtrusive acoustic sensing system designed to enable glasses to continuously monitor facial expressions. It utilizes two pairs of speakers and microphones mounted on glasses, to emit encoded inaudible acoustic signals directed towards the face, capturing subtle skin deformations associated with facial expressions. The reflected signals are processed through a customized machine-learning pipeline to estimate full facial movements. EyeEcho samples at 83.3 Hz with a relatively low power consumption of 167 mW. Our user study involving 12 participants demonstrates that, with just four minutes of training data, EyeEcho achieves highly accurate tracking performance across different real-world scenarios, including sitting, walking, and after remounting the devices. Additionally, a semi-in-the-wild study involving 10 participants further validates EyeEcho's performance in naturalistic scenarios while participants engage in various daily activities. Finally, we showcase EyeEcho's potential to be deployed on a commercial-off-the-shelf (COTS) smartphone, offering real-time facial expression tracking.
1
Single-handed Folding Interactions with a Modified Clamshell Flip Phone
Yen-Ting Yeh (University of Waterloo, Waterloo, Ontario, Canada)Antony Albert Raj Irudayaraj (University of Waterloo, Waterloo, Ontario, Canada)Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
We explore and evaluate single-handed folding interactions suitable for “modified clamshell flip phones” with a full screen touch display that folds in half along the short dimension. Three categories of interactions are identified: only-fold, touch-enhanced fold, and fold-enhanced touch; in which gestures are created using fold direction, fold magnitude, and touch position. A prototype evaluation device is built to resemble clamshell flip phones, but with a modified hinge and spring system to enable folding in both directions. A study investigates performance and preference for 30 fold gestures to discover which are most promising. To demonstrate how folding interactions could be incorporated into flip phone interfaces, applications such as map browsing, text editing, and menu shortcuts are described.
1
ErgoPulse: Electrifying Your Lower Body With Biomechanical Simulation-based Electrical Muscle Stimulation Haptic System in Virtual Reality
Seokhyun Hwang (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Jeongseok Oh (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Seongjun Kang (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Minwoo Seong (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Ahmed Ibrahim Ahmed Mohamed Elsharkawy (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)SeungJun Kim (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)
This study presents ErgoPulse, a system that integrates biomechanical simulation with electrical muscle stimulation (EMS) to provide kinesthetic force feedback to the lower-body in virtual reality (VR). ErgoPulse features two main parts: a biomechanical simulation part that calculates the lower-body joint torques to replicate forces from VR environments, and an EMS part that translates torques into muscle stimulations. In the first experiment, we assessed users' ability to discern haptic force intensity and direction, and observed variations in perceived resolution based on force direction. The second experiment evaluated ErgoPulse's ability to increase haptic force accuracy and user presence in both continuous and impulse force VR game environments. The experimental results showed that ErgoPulse's biomechanical simulation increased the accuracy of force delivery compared to traditional EMS, enhancing the overall user presence. Furthermore, the interviews proposed improvements to the haptic experience by integrating additional stimuli such as temperature, skin stretch, and impact.
1
Evolving Presentation of Self: The Influence of Dementia Communication Challenges on Everyday Interactions
Yvon Ruitenburg (Eindhoven University of Technology, Eindhoven, Netherlands)Minha Lee (Eindhoven University of Technology, Eindhoven, Netherlands)Panos Markopoulos (Eindhoven University of Technology, Eindhoven, Netherlands)Wijnand IJsselsteijn (Eindhoven University of Technology, Eindhoven, Netherlands)
Communication can become challenging for people with dementia due to language, speech, discourse, and memory impairments. Although recent developments in Human-Computer Interaction have addressed some of these communication challenges, little is known about how they affect the self-presentation of people with dementia in everyday interactions. To understand this connection, we conducted interviews with sixteen people with dementia, six spouses, and fourteen formal caregivers. Our qualitative data revealed that people with dementia's presentation of competence, politeness, engagement, and reality is altered by communication challenges, which can impact their self-esteem, interactions, and relationships. Our study highlights the need for developing technologies that can enhance mutual understanding and acceptance of people with dementia's evolving presentation of self. Additionally, policy changes are required to reduce the stigma associated with communication challenges to foster social inclusion.
1
Fragmented Moments, Balanced Choices: How Do People Make Use of Their Waiting Time?
Jian Zheng (University of Maryland, College Park, Maryland, United States)Ge Gao (University of Maryland, College Park, Maryland, United States)
Everyone spends some time waiting every day. HCI research has developed tools for boosting productivity while waiting. However, little is known about how people naturally spend their waiting time. We conducted an experience sampling study with 21 working adults who used a mobile app to report their daily waiting time activities over two weeks. The aim of this study is to understand the activities people do while waiting and the effect of situational factors. We found that participants spent about 60% of their waiting time on leisure activities, 20% on productive activities, and 20% on maintenance activities. These choices are sensitive to situational factors, including the accessible device, location, and certain routines of the day. Our study complements previous ones by demonstrating that people use waiting time for various goals beyond productivity, including maintaining work-life balance. Our findings shed light on future empirical research and system design for time management.
1
Investigating Effect of Altered Auditory Feedback on Self-Representation, Subjective Operator Experience, and Task Performance in Teleoperation of a Social Robot
Nami Ogawa (CyberAgent, Inc., Tokyo, Japan)Jun Baba (CyberAgent, Inc., Tokyo, Japan)Junya Nakanishi (Osaka Univ., Osaka, Japan)
Teleoperating social robots requires operators to "speak as the robot," as local users would favor robots whose appearance and voice match. This study focuses on real-time altered auditory feedback (AAF), a method to transform the acoustic traits of one's speech and provide feedback to the speaker, to transform the operator's self-representation toward "becoming the robot." To explore whether AAF with voice transformation (VT) matched to the robot's appearance can influence the operator's self-representation and ease the task, we experimented with three conditions: no VT (No-VT), only VT (VT-only), and VT with AAF (VT-AAF), where participants teleoperated a robot to verbally serve real passersby at a bakery. The questionnaire results demonstrate that VT-AAF changed the participants' self-representation to match the robot's character and improved participants' subjective teleoperating experience, while task performance and implicit measures of self-representation were not significantly affected. Notably, 87% of the participants preferred VT-AAF the most.
1
Using Low-frequency Sound to Create Non-contact Sensations On and In the Body
Waseem Hassan (University of Copenhagen, Copenhagen, Denmark)Asier Marzo (Universidad Publica de Navarra, Pamplona, Navarre, Spain)Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)
This paper proposes a method for generating non-contact sensations using low-frequency sound waves without requiring user instrumentation. This method leverages the fundamental acoustic response of a confined space to produce predictable pressure spatial distributions at low frequencies, called modes. These modes can be used to produce sensations either throughout the body, in localized areas of the body, or within the body. We first validate the location and strength of the modes simulated by acoustic modeling. Next, a perceptual study is conducted to show how different frequencies produce qualitatively different sensations across and within the participants' bodies. The low-frequency sound offers a new way of delivering non-contact sensations throughout the body. The results indicate a high accuracy for predicting sensations at specific body locations.
1
FocusFlow: 3D Gaze-Depth Interaction in Virtual Reality Leveraging Active Visual Depth Manipulation
Chenyang Zhang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Tiansu Chen (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States)Eric Shaffer (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States)Elahe Soltanaghai (University of Illinois Urbana-Champaign, Urbana, Illinois, United States)
Gaze interaction presents a promising avenue in Virtual Reality (VR) due to its intuitive and efficient user experience. Yet, the depth control inherent in our visual system remains underutilized in current methods. In this study, we introduce FocusFlow, a hands-free interaction method that capitalizes on human visual depth perception within the 3D scenes of Virtual Reality. We first develop a binocular visual depth detection algorithm to understand eye input characteristics. We then propose a layer-based user interface and introduce the concept of a "Virtual Window" that offers intuitive and robust gaze-depth VR interaction, despite the limited accuracy and precision of visual depth estimation at farther distances. Finally, to help novice users actively manipulate their visual depth, we propose two learning strategies that use different visual cues to help users master visual depth control. Our user studies with 24 participants demonstrate the usability of our proposed Virtual Window concept as a gaze-depth interaction method. In addition, our findings reveal that the user experience can be enhanced through an effective learning process with adaptive visual cues, helping users to develop muscle memory for this brand-new input mechanism. We conclude the paper by discussing potential future research topics of gaze-depth interaction.
1
EITPose: Wearable and Practical Electrical Impedance Tomography for Continuous Hand Pose Estimation
Alexander Kyu (Human Computer Interaction Institute, Pittsburgh, Pennsylvania, United States)Hongyu Mao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Junyi Zhu (MIT CSAIL, Cambridge, Massachusetts, United States)Mayank Goel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Karan Ahuja (Northwestern University, Evanston, Illinois, United States)
Real-time hand pose estimation has a wide range of applications spanning gaming, robotics, and human-computer interaction. In this paper, we introduce EITPose, a wrist-worn, continuous 3D hand pose estimation approach that uses eight electrodes positioned around the forearm to model its interior impedance distribution during pose articulation. Unlike wrist-worn systems relying on cameras, EITPose has a slim profile (12 mm thick sensing strap) and is power-efficient (consuming only 0.3 W of power), making it an excellent candidate for integration into consumer electronic devices. In a user study involving 22 participants, EITPose achieves a within-session mean per-joint positional error of 11.06 mm. Its camera-free design prioritizes user privacy, yet it maintains cross-session and cross-user accuracy levels comparable to camera-based wrist-worn systems, thus making EITPose a promising technology for practical hand pose estimation.
1
ShareYourReality: Investigating Haptic Feedback and Agency in Virtual Avatar Co-embodiment
Karthikeya Puttur Venkatraj (Centrum Wiskunde & Informatica, Amsterdam, Netherlands)Wo Meijer (TU Delft, Delft, Netherlands)Monica Perusquia-Hernandez (Nara Institute of Science and Technology, Ikoma-shi, Nara, Japan)Gijs Huisman (Delft University of Technology, Delft, Netherlands)Abdallah El Ali (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands)
Virtual co-embodiment enables two users to share a single avatar in Virtual Reality (VR). During such experiences, the illusion of shared motion control can break during joint-action activities, highlighting the need for position-aware feedback mechanisms. Drawing on the perceptual crossing paradigm, we explore how haptics can enable non-verbal coordination between co-embodied participants. In a within-subjects study (20 participant pairs), we examined the effects of vibrotactile haptic feedback (None, Present) and avatar control distribution (25-75%, 50-50%, 75-25%) across two VR reaching tasks (Targeted, Free-choice) on participants' Sense of Agency (SoA), co-presence, body ownership, and motion synchrony. We found (a) lower SoA in the free-choice task with haptics than without, (b) higher SoA during the shared targeted task, (c) that co-presence and body ownership were significantly higher in the free-choice task, and (d) that players' hand motions synchronized more in the targeted task. We provide cautionary considerations when including haptic feedback mechanisms for avatar co-embodiment experiences.
1
Beyond the Blink: Investigating Combined Saccadic & Blink-Suppressed Hand Redirection in Virtual Reality
André Zenner (Saarland University, Saarland Informatics Campus, Saarbrücken, Germany)Chiara Karr (Saarland University, Saarland Informatics Campus, Saarbrücken, Germany)Martin Feick (German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus, Saarbrücken, Germany)Oscar Ariza (Universität Hamburg, Hamburg, Germany)Antonio Krüger (Saarland University, Saarland Informatics Campus, Saarbrücken, Germany)
In pursuit of hand redirection techniques that are ever more tailored to human perception, we propose the first algorithm for hand redirection in virtual reality that makes use of saccades, i.e., fast ballistic eye movements that are accompanied by the perceptual phenomenon of change blindness. Our technique combines the previously proposed approaches of gradual hand warping and blink-suppressed hand redirection with the novel approach of saccadic redirection in one unified yet simple algorithm. We compare three variants of the proposed Saccadic & Blink-Suppressed Hand Redirection (SBHR) technique with the conventional approach to redirection in a psychophysical study (N=25). Our results highlight the great potential of our proposed technique for comfortable redirection by showing that SBHR allows for significantly greater magnitudes of unnoticeable redirection while being perceived as significantly less intrusive and less noticeable than commonly employed techniques that only use gradual hand warping.
1
TimeTunnel: Integrating Spatial and Temporal Motion Editing for Character Animation in Virtual Reality
Qian Zhou (Autodesk Research, Toronto, Ontario, Canada)David Ledo (Autodesk Research, Toronto, Ontario, Canada)George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada)Fraser Anderson (Autodesk Research, Toronto, Ontario, Canada)
Editing character motion in Virtual Reality is challenging as it requires working with both spatial and temporal data using controls with multiple degrees of freedom. The spatial and temporal controls are separated, making it difficult to adjust poses over time and predict the effects across adjacent frames. To address this challenge, we propose TimeTunnel, an immersive motion editing interface that integrates spatial and temporal control for 3D character animation in VR. TimeTunnel provides an approachable editing experience via KeyPoses and Trajectories. KeyPoses are a set of representative poses automatically computed to concisely depict motion. Trajectories are 3D animation curves that pass through the joints of KeyPoses to represent in-betweens. TimeTunnel integrates spatial and temporal control by superimposing Trajectories and KeyPoses onto a 3D character. We conducted two studies to evaluate TimeTunnel. In our quantitative study, TimeTunnel reduced the amount of time required for editing motion, and saved effort in locating target poses. Our qualitative study with domain experts demonstrated how TimeTunnel is an approachable interface that can simplify motion editing, while still preserving a direct representation of motion.
1
PaperTouch: Tangible Interfaces through Paper Craft and Touchscreen Devices
Qian Ye (National University of Singapore, Singapore, Singapore)Zhen Zhou Yong (National University of Singapore, Singapore, Singapore)Bo Han (National University of Singapore, Singapore, Singapore)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore)Clement Zheng (National University of Singapore, Singapore, Singapore)
Paper and touchscreen devices are two common objects found around us, and we investigated the potential of their intersection for tangible interface design. In this research, we developed PaperTouch, an approach to designing paper-based mechanisms that translate a variety of physical interactions into touch events on a capacitive touchscreen. These mechanisms act as switches that close during interaction, connecting the touchscreen to the device's ground bus. To develop PaperTouch, we explored different types of paper along with the making process around them. We also built a range of applications to showcase different tangible interfaces facilitated by PaperTouch, including musical instruments, educational dioramas, and playful products. By reflecting on this exploration, we uncovered emerging design dimensions that consider the interactions, materiality, and embodiment of PaperTouch interfaces. We also surfaced, through annotations, the tacit know-how that we gained in our design process for others to refer to.