List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

5
Unlocking Understanding: An Investigation of Multimodal Communication in Virtual Reality Collaboration
Ryan Ghamandi (University of Central Florida, Orlando, Florida, United States), Ravi Kiran Kattoju (University of Central Florida, Orlando, Florida, United States), Yahya Hmaiti (University of Central Florida, Orlando, Florida, United States), Mykola Maslych (University of Central Florida, Orlando, Florida, United States), Eugene Matthew Taranta (University of Central Florida, Orlando, Florida, United States), Ryan P. McMahan (University of Central Florida, Orlando, Florida, United States), Joseph LaViola (University of Central Florida, Orlando, Florida, United States)
Communication in collaboration, especially synchronous remote communication, is crucial to achieving task-specific goals; insufficient or excessive communication can harm task performance and increase mental fatigue. Identifying which combinations of communication modalities provide the most efficient transfer of information in collaborative settings would therefore greatly improve collaboration. To investigate this, we developed a remote, synchronous, asymmetric VR collaborative assembly task in which users played the role of either mentor or mentee and were exposed to different combinations of three communication modalities: voice, gestures, and gaze. Through task-based experiments with 25 pairs of participants (50 individuals), we evaluated quantitative and qualitative data and found that gaze did not differ significantly from multiple combinations of communication modalities. Our qualitative results indicate that mentees experienced more difficulty and frustration in completing tasks than mentors, with both types of users preferring all three modalities to be present.
4
Me, My Health, and My Watch: How Children with ADHD Understand Smartwatch Health Data
Elizabeth Ankrah (University of California, Irvine, Irvine, California, United States), Franceli L. Cibrian (Chapman University, Orange, California, United States), Lucas M. Silva (University of California, Irvine, Irvine, California, United States), Arya Tavakoulnia (University of California Irvine, Irvine, California, United States), Jesus Armando Beltran (UCI, Irvine, California, United States), Sabrina Schuck (University of California Irvine, Irvine, California, United States), Kimberley D. Lakes (University of California Riverside, Riverside, California, United States), Gillian R. Hayes (University of California, Irvine, Irvine, California, United States)
Children with ADHD can experience a wide variety of challenges related to self-regulation, which can lead to poor educational, health, and wellness outcomes. Technological interventions, such as mobile and wearable health systems, can support data collection and reflection about health status. However, little is known about how children with ADHD interpret such data. We conducted a six-week deployment study with 10 children, aged 10 to 15, who used a smartwatch in their homes. Results from observations and interviews indicate that children with ADHD can interpret their own health data, particularly in the moment. However, as these children develop more autonomy, smartwatch systems may require alternative forms of data reflection that are interpretable and actionable for them. This work contributes to the scholarly discourse around health data visualization, particularly in considering implications for the design of health technologies for children with ADHD.
4
The Social Journal: Investigating Technology to Support and Reflect on Social Interactions
Sophia Sakel (LMU Munich, Munich, Germany), Tabea Blenk (LMU Munich, Munich, Germany), Albrecht Schmidt (LMU Munich, Munich, Germany), Luke Haliburton (LMU Munich, Munich, Germany)
Social interaction is a crucial part of what it means to be human. Maintaining a healthy social life is strongly tied to positive outcomes for both physical and mental health. While we use personal informatics data to reflect on many aspects of our lives, technology-supported reflection for social interactions is currently under-explored. To address this, we first conducted an online survey (N=124) to understand how users want to be supported in their social interactions. Based on this, we designed and developed an app for users to track and reflect on their social interactions and deployed it in the wild for two weeks (N=25). Our results show that users are interested in tracking meaningful in-person interactions that currently go untracked, and that an app can effectively support self-reflection on social interaction frequency and social load. We contribute insights and concrete design recommendations for technology-supported reflection for social interaction.
4
Predicting the Noticeability of Dynamic Virtual Elements in Virtual Reality
Zhipeng Li (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Yi Fei Cheng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Yukang Yan (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), David Lindlbauer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
While Virtual Reality (VR) systems can present virtual elements such as notifications anywhere, designing them so that they are neither missed by nor distracting to users is highly challenging for content creators. To address this challenge, we introduce a novel approach to predicting the noticeability of virtual elements. It computes the visual saliency distribution of what users see and analyzes the temporal changes of that distribution with respect to animated virtual elements. The computed features serve as input to a long short-term memory (LSTM) model that predicts whether a virtual element will be noticed. Our approach is based on data collected from 24 users in different VR environments performing tasks such as watching a video or typing. We evaluated our approach (n = 12) and show that it can predict when users notice a change to a virtual element to within 2.56 s of ground truth, and we demonstrate its versatility with a set of applications. We believe that our predictive approach opens the path for computational design tools that assist VR content creators in building interfaces that automatically adapt virtual elements based on noticeability.
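To make the predictive pipeline concrete, here is a minimal sketch of the kind of model the abstract describes: per-frame saliency features feeding an LSTM classifier. The feature choice, dimensions, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the abstract's pipeline: per-frame saliency
# features for an animated virtual element feed an LSTM that predicts
# whether the element is noticed. Feature choice, dimensions, and
# hyperparameters are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class NoticeabilityLSTM(nn.Module):
    def __init__(self, feature_dim=8, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        # x: (batch, frames, feature_dim), e.g., saliency mass on the
        # element and its frame-to-frame change while the element animates
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1]))  # P(noticed) per sequence

model = NoticeabilityLSTM()
clips = torch.randn(4, 120, 8)  # 4 clips x 120 frames of saliency features
print(model(clips).shape)       # torch.Size([4, 1])
```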
4
DiaryMate: Understanding User Perceptions and Experience in Human-AI Collaboration for Personal Journaling
Taewan Kim (KAIST, Daejeon, Korea, Republic of), Donghoon Shin (University of Washington, Seattle, Washington, United States), Young-Ho Kim (NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of), Hwajung Hong (KAIST, Daejeon, Korea, Republic of)
With their generative capabilities, large language models (LLMs) have transformed the role of technological writing assistants from simple editors into writing collaborators. This transition highlights the need to understand user perceptions and experiences, such as how to balance user intent against LLM involvement across writing domains, when designing writing assistants. In this study, we delve into the less explored domain of personal writing, focusing on the use of LLMs in introspective activities. Specifically, we designed DiaryMate, a system that assists users in journal writing with an LLM. Through a 10-day field study (N=24), we observed that participants used the diverse sentences generated by the LLM to reflect on their past experiences from multiple perspectives. However, we also observed that they over-relied on the LLM, often prioritizing its emotional expressions over their own. Drawing from these findings, we discuss design considerations for leveraging LLMs in personal writing practice.
4
Observer Effect in Social Media Use
Koustuv Saha (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States), Pranshu Gupta (Georgia Institute of Technology, Atlanta, Georgia, United States), Gloria Mark (University of California, Irvine, Irvine, California, United States), Emre Kiciman (Microsoft Research, Redmond, Washington, United States), Munmun De Choudhury (Georgia Institute of Technology, Atlanta, Georgia, United States)
While social media data is a valuable source for inferring human behavior, its in-practice utility hinges on extraneous factors. Notable among these is the "observer effect," where awareness of being monitored can alter people's social media use. We present a causal-inference study examining this phenomenon in the longitudinal Facebook use of 300+ participants who voluntarily shared data spanning an average of 82 months before and 5 months after study enrollment. We measured deviation from participants' expected social media use through time-series analyses. Individuals with high cognitive ability and low neuroticism decreased posting immediately after enrollment, while those with high openness increased posting. The sharing of self-focused content decreased, while more diverse topics emerged. We situate the findings within theories of self-presentation and self-consciousness. We discuss the implications of correcting for the observer effect in social media data-driven measurements, and how this phenomenon sheds light on the ethics of such measurements.
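As a rough illustration of how "deviation from expected use" might be computed, the toy sketch below forecasts daily posting from a pre-enrollment day-of-week baseline and z-scores the residuals; this is an assumed simplification, not the paper's actual time-series method.

```python
# Toy sketch of "deviation from expected use": forecast each post-enrollment
# day from the pre-enrollment day-of-week average, then z-score the residual.
# The 7-day seasonality, phase alignment, and z-scoring are assumptions.
import numpy as np

def posting_deviation(pre_counts, post_counts, period=7):
    pre = np.asarray(pre_counts, dtype=float)
    post = np.asarray(post_counts, dtype=float)
    expected_by_day = np.array([pre[d::period].mean() for d in range(period)])
    expected = expected_by_day[np.arange(len(post)) % period]
    scale = pre.std() if pre.std() > 0 else 1.0
    return (post - expected) / scale  # positive = posting more than expected

pre = np.random.poisson(3, 82 * 30)   # ~82 months of daily post counts
post = np.random.poisson(2, 5 * 30)   # ~5 months after enrollment
print(posting_deviation(pre, post)[:7])
```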
4
Personalizing Privacy Protection With Individuals' Regulatory Focus: Would You Preserve or Enhance Your Information Privacy?
Reza Ghaiumy Anaraky (New York University, New York City, New York, United States), Yao Li (University of Central Florida, Orlando, Florida, United States), Hichang Cho (National University of Singapore, Singapore, Singapore), Danny Yuxing Huang (New York University, New York, New York, United States), Kaileigh Angela Byrne (Clemson University, Clemson, South Carolina, United States), Bart Knijnenburg (Clemson University, Clemson, South Carolina, United States), Oded Nov (New York University, New York, New York, United States)
In this study, we explore the effectiveness of persuasive messages endorsing the adoption of a privacy protection technology (IoT Inspector) tailored to individuals' regulatory focus (promotion or prevention). We explore whether and how regulatory fit (i.e., tuning the goal-pursuit mechanism to individuals' internal regulatory focus) can increase persuasion and adoption. We conducted a between-subjects experiment (N = 236) presenting participants with IoT Inspector in either a gain framing ("Privacy Enhancing Technology," PET) or a loss framing ("Privacy Preserving Technology," PPT). Results show that the effect of regulatory fit on adoption is mediated by trust and privacy calculus processes: prevention-focused users who read the PPT message trust the tool more, and the privacy calculus favors using the tool when promotion-focused individuals read the PET message. We discuss how understanding the cognitive mechanisms behind regulatory fit in privacy decision-making can support privacy protection.
4
Robot-Assisted Decision-Making: Unveiling the Role of Uncertainty Visualisation and Embodiment
Sarah Schömbs (The University of Melbourne, Melbourne, VIC, Australia), Saumya Pareek (University of Melbourne, Melbourne, Victoria, Australia), Jorge Goncalves (University of Melbourne, Melbourne, Australia), Wafa Johal (University of Melbourne, Melbourne, VIC, Australia)
Robots are embodied agents that act under several sources of uncertainty. When assisting humans in a collaborative task, robots need to communicate their uncertainty to help inform decisions. In this study, we examine the use of visualising a robot's uncertainty in a high-stakes assisted decision-making task. In particular, we explore how different modalities of uncertainty visualisation (graphical display vs. the robot's embodied behaviour) and confidence levels (low, high, 100%) conveyed by a robot affect human decision-making and perception during a collaborative task. Our results show that these visualisations significantly impact how participants arrive at their decisions as well as how they perceive the robot's transparency across the different confidence levels. We highlight potential trade-offs and offer implications for robot-assisted decision-making. Our work contributes empirical insights into how humans make use of uncertainty visualisations conveyed by a robot in a critical robot-assisted decision-making scenario.
4
Signs of the Smart City: Exploring the Limits and Opportunities of Transparency
Eric Corbett (Google Research, New York, New York, United States), Graham Dove (New York University, New York, New York, United States)
This paper reports on a research through design (RtD) inquiry into public perceptions of the transparency of Internet of Things (IoT) sensors increasingly deployed within urban neighborhoods as part of smart city programs. In particular, we report on the results of three participatory design workshops during which 40 New York City residents used physical signage as a medium for materializing transparency concerns about several sensors. We found that people's concerns went beyond making sensors more transparent; instead, they sought to reveal the technology's interconnected social, political, and economic processes. Building on these findings, we highlight opportunities to move from treating transparency as an object to treating it as an ongoing activity. We argue that this move opens opportunities for designers and policy-makers to provide meaningful and actionable transparency of smart cities.
4
Using the Visual Language of Comics to Alter Sensations in Augmented Reality
Arpit Bhatia (University of Copenhagen, Copenhagen, Denmark), Henning Pohl (Aalborg University, Aalborg, Denmark), Teresa Hirzle (University of Copenhagen, Copenhagen, Denmark), Hasti Seifi (Arizona State University, Tempe, Arizona, United States), Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark)
Augmented Reality (AR) excels at altering what we see, but non-visual sensations are difficult to augment. To augment non-visual sensations in AR, we draw on the visual language of comic books. Synthesizing comic studies, we create a design space describing how comic elements (e.g., onomatopoeia) can depict non-visual sensations (e.g., hearing). To demonstrate this design space, we built eight demos, such as speed lines to make users think they are faster and smell lines to make a scent seem stronger. We evaluated these elements in a qualitative user study (N=20) in which participants performed everyday tasks with comic elements added as augmentations. All participants reported feeling a change in perception for at least one sensation, with perceived changes reported by as few as four participants (touch) and as many as 15 (hearing). The elements also had positive effects on emotion and user experience, even when participants did not feel changes in perception.
4
Tagnoo: Enabling Smart Room-Scale Environments with RFID-Augmented Plywood
Yuning Su (Simon Fraser University, Burnaby, British Columbia, Canada), Tingyu Zhang (Simon Fraser University, Burnaby, British Columbia, Canada), Jiuen Feng (University of Science and Technology of China, Hefei, Anhui, China), Yonghao Shi (Simon Fraser University, Burnaby, British Columbia, Canada), Xing-Dong Yang (Simon Fraser University, Burnaby, British Columbia, Canada), Te-Yen Wu (Florida State University, Tallahassee, Florida, United States)
Tagnoo is computational plywood augmented with RFID tags, aimed at empowering woodworkers to effortlessly create room-scale smart environments. Unlike existing solutions, Tagnoo requires no technical expertise and does not disrupt established woodworking routines. This battery-free and cost-effective solution seamlessly integrates computational capabilities into plywood while preserving its original appearance and functionality. In this paper, we explore the parameters that influence Tagnoo's sensing performance and woodworking compatibility through a series of experiments. Additionally, we demonstrate the construction of a small office environment, comprising a desk, chair, shelf, and floor, all crafted by an experienced woodworker using conventional tools such as a table saw and screws while adhering to established construction workflows. Our evaluation confirms that the smart environment can recognize 18 daily objects and user activities, such as a user sitting on the floor or a glass lunchbox being placed on the desk, with over 90% accuracy.
4
MOSion: Gaze Guidance with Motion-triggered Visual Cues by Mosaic Patterns
Arisa Kohtani (Tokyo Institute of Technology, Tokyo, Japan), Shio Miyafuji (Tokyo Institute of Technology, Tokyo, Japan), Keishiro Uragaki (Aoyama Gakuin University, Tokyo, Japan), Hidetaka Katsuyama (Tokyo Institute of Technology, Tokyo, Japan), Hideki Koike (Tokyo Institute of Technology, Tokyo, Japan)
We propose MOSion, a gaze-guiding method that adjusts its guiding strength in response to observers' motion, based on a high-speed projector and the afterimage effect in the human visual system. Our method decomposes the target area into mosaic patterns that embed visual cues in the perceived images. The patterns direct the attention of only moving observers to the target area; a stationary observer sees the original image with little distortion because of light integration in visual perception. Precomputing the patterns provides this adaptive guiding effect without tracking devices or motion-dependent computational costs. An evaluation and a user study show that the mosaic decomposition enhances perceived saliency with few visual artifacts, especially under moving conditions. Our method, embedded in white light projections, works in various situations such as planar posters, advertisements, and curved objects.
3
MindfulDiary: Harnessing Large Language Model to Support Psychiatric Patients' Journaling
Taewan Kim (KAIST, Daejeon, Korea, Republic of), Seolyeong Bae (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of), Hyun Ah Kim (NAVER Cloud, Gyeonggi-do, Korea, Republic of), Su-woo Lee (Wonkwang University Hospital, Iksan-si, Korea, Republic of), Hwajung Hong (KAIST, Daejeon, Korea, Republic of), Chanmo Yang (Wonkwang University Hospital, Wonkwang University, Iksan, Jeonbuk, Korea, Republic of), Young-Ho Kim (NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of)
Large Language Models (LLMs) offer promising opportunities in mental health domains, although their inherent complexity and low controllability elicit concern regarding their applicability in clinical settings. We present MindfulDiary, an LLM-driven journaling app that helps psychiatric patients document daily experiences through conversation. Designed in collaboration with mental health professionals, MindfulDiary takes a state-based approach to safely comply with the experts' guidelines while carrying on free-form conversations. Through a four-week field study involving 28 patients with major depressive disorder and five psychiatrists, we examined how MindfulDiary facilitates patients' journaling practice and clinical care. The study revealed that MindfulDiary supported patients in consistently enriching their daily records and helped clinicians better empathize with their patients through an understanding of their thoughts and daily contexts. Drawing on these findings, we discuss the implications of leveraging LLMs in the mental health domain, bridging the technical feasibility and their integration into clinical settings.
3
Investigating Contextual Notifications to Drive Self-Monitoring in mHealth Apps for Weight Maintenance
Yu-Peng Chen (University of Florida, Gainesville, Florida, United States), Julia Woodward (University of South Florida, Tampa, Florida, United States), Dinank Bista (University of Florida, Gainesville, Florida, United States), Xuanpu Zhang (Department of CISE, University of Florida, Gainesville, Florida, United States), Ishvina Singh (University of Florida, Gainesville, Florida, United States), Oluwatomisin Obajemu (University of Florida, Gainesville, Florida, United States), Meena N. Shankar (University of Florida, Gainesville, Florida, United States), Kathryn M. Ross (University of Florida, Gainesville, Florida, United States), Jaime Ruiz (University of Florida, Gainesville, Florida, United States), Lisa Anthony (University of Florida, Gainesville, Florida, United States)
Mobile health applications for weight maintenance offer self-monitoring as a tool to empower users to achieve health goals (e.g., losing weight), yet maintaining consistent self-monitoring over time proves challenging for users. These apps use push notifications to increase users' app engagement and reduce long-term attrition, but notifications are often ignored because they appear at inopportune moments. We therefore analyzed whether delivering push notifications based on time alone, or also considering user context (e.g., current activity), affected users' engagement with a weight maintenance app in a 4-week in-the-wild study with 30 participants. We found no difference between the two conditions in participants' overall (across-the-day) self-monitoring frequency, but in the context-based condition participants responded faster and more frequently to notifications and logged their data in a more timely manner (i.e., as eating or exercising occurred). Our work informs the design of notifications in weight maintenance apps to improve their efficacy in promoting self-monitoring.
3
Mnemosyne - Supporting Reminiscence for Individuals with Dementia in Residential Care Settings
Andrea Baumann (Lancaster University, Lancaster, United Kingdom), Peter Shaw (Lancaster University, Lancaster, United Kingdom), Ludwig Trotter (Lancaster University, Lancaster, Lancashire, United Kingdom), Sarah Clinch (The University of Manchester, Manchester, United Kingdom), Nigel Davies (Lancaster University, Lancaster, United Kingdom)
Reminiscence is known to play an important part in helping to mitigate the effects of dementia. Within the HCI community, work has typically focused on supporting reminiscence at an individual or social level, but less attention has been given to supporting reminiscence in residential care settings. This lack of research became particularly apparent during the COVID pandemic, when traditional forms of reminiscence involving physical artefacts and face-to-face interactions became especially challenging. In this paper we report on the design, development, and evaluation of a reminiscence system, Mnemosyne, deployed in a residential care home over a two-year period that included the pandemic. Mnemosyne comprises a pervasive display network and a browser-based application, whose adoption and use we explored using a mixed-methods approach. Our findings offer insights that will help shape the development and evaluation of future systems, particularly those that use pervasive displays to support unsupervised reminiscence.
3
A Robot Jumping the Queue: Expectations About Politeness and Power During Conflicts in Everyday Human-Robot Encounters
Franziska Babel (Linköping University, Linköping, Sweden), Robin Welsch (Aalto University, Espoo, Finland), Linda Miller (Ulm University, Ulm, Germany), Philipp Hock (Linköping University, Linköping, Sweden), Sam Thellman (Linköping University, Linköping, Sweden), Tom Ziemke (Linköping University, Linköping, Sweden)
Increasing encounters between people and autonomous service robots may lead to conflicts due to mismatches between human expectations and robot behaviour. This interactive online study (N = 335) investigated human-robot interactions at an elevator, focusing on the effect of communication and behavioural expectations on participants' acceptance and compliance. Participants evaluated a humanoid delivery robot primed as either submissive or assertive. The robot either matched or violated these expectations by using a command or appeal to ask for priority and then entering either first or waiting for the next ride. The results highlight that robots are less accepted if they violate expectations by entering first or using a command. Interactions were more effective if participants expected an assertive robot which then asked politely for priority and entered first. The findings emphasize the importance of power expectations in human-robot conflicts for the robot's evaluation and effectiveness in everyday situations.
3
Visual Noise Cancellation: Exploring Visual Discomfort and Opportunities for Vision Augmentations
Junlei Hong (University of Otago, Dunedin, New Zealand), Tobias Langlotz (University of Otago, Dunedin, New Zealand), Jonathan Sutton (University of Otago, Dunedin, New Zealand), Holger Regenbrecht (University of Otago, Dunedin, Otago, New Zealand)
Acoustic noise control or cancellation (ANC) is a commonplace component of modern audio headphones. ANC aims to actively mitigate disturbing environmental noise for a quieter and improved listening experience, digitally controlling the frequency and amplitude characteristics of sound. Much less explored are visual noise and active visual noise control, which we address here. We first explore visual noise and the scenarios in which it arises, based on findings from four workshops we conducted. We then introduce the concept of visual noise cancellation (VNC) and how it can be used to reduce the identified effects of visual noise. In addition, we developed head-worn demonstration prototypes to practically explore the concept of active VNC in selected scenarios in a user study. Finally, we discuss the application of VNC, including vision augmentations that moderate the user's view of the environment to address perceptual needs and to provide augmented reality content.
3
Technology-Mediated Non-pharmacological Interventions for Dementia: Needs for and Challenges in Professional, Personalized and Multi-Stakeholder Collaborative Interventions
Yuling Sun (East China Normal University, Shanghai, China), Zhennan Yi (Beijing Normal University, Beijing, China), Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, Hong Kong), Junyan Mao (East China Normal University, Shanghai, China), Xin Tong (Duke Kunshan University, Kunshan, Suzhou, China)
Designing and using technologies to support Non-Pharmacological Interventions (NPI) for People with Dementia (PwD) has drawn increasing attention in HCI, with expectations of higher user engagement and positive outcomes. Yet technologies for NPI can only be valuable if practitioners successfully incorporate them into ongoing intervention practices beyond a limited research period. Currently, we know little about how practitioners experience and perceive these technologies in practical NPI for PwD. In this paper, we investigate this question through observations of five in-person NPI activities and interviews with 11 therapists and 5 caregivers. Our findings elaborate the practical NPI workflow and its characteristics, as well as practitioners' attitudes toward, experiences with, and perceptions of technology-mediated NPI in practice. Our participants emphasized that practical NPI is a complex, professional practice that requires fine-grained, personalized evaluation and planning, and that its execution is situated and collaborative across multiple stakeholders. Existing technologies often fail to consider these characteristics, which limits their practical effectiveness and sustainable use. Drawing on our findings, we discuss implications for designing more useful and practical NPI technologies.
3
"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents
Zhiping Zhang (Khoury College of Computer Sciences, Boston, Massachusetts, United States), Michelle Jia (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Hao-Ping (Hank) Lee (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Bingsheng Yao (Rensselaer Polytechnic Institute, Troy, New York, United States), Sauvik Das (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Ada Lerner (Northeastern University, Boston, Massachusetts, United States), Dakuo Wang (Northeastern University, Boston, Massachusetts, United States), Tianshi Li (Northeastern University, Boston, Massachusetts, United States)
The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, being primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users constantly face trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the need for paradigm shifts to protect the privacy of LLM-based CA users.
3
Decide Yourself or Delegate - User Preferences Regarding the Autonomy of Personal Privacy Assistants in Private IoT-Equipped Environments
Karola Marky (Ruhr-University Bochum, Bochum, Germany), Alina Stöver (Technische Universität Darmstadt, Darmstadt, Germany), Sarah Prange (University of the Bundeswehr Munich, Munich, Germany), Kira Bleck (TU Darmstadt, Darmstadt, Germany), Paul Gerber (Technische Universität Darmstadt, Darmstadt, Germany), Verena Zimmermann (ETH Zürich, Zürich, Switzerland), Florian Müller (LMU Munich, Munich, Germany), Florian Alt (University of the Bundeswehr Munich, Munich, Germany), Max Mühlhäuser (TU Darmstadt, Darmstadt, Germany)
Personalized privacy assistants (PPAs) communicate privacy-related decisions of their users to Internet of Things (IoT) devices. There are different ways to implement PPAs, varying in their degree of autonomy and decision model. This paper investigates user perceptions of PPA autonomy models and privacy profiles (archetypes of individual privacy needs) as a basis for PPA decisions in private environments (e.g., a friend's home). We first explore how privacy profiles can be assigned to users and propose an assignment method. Next, we investigate user perceptions in 18 usage scenarios with varying contexts, data types, and numbers of decisions in a study with 1126 participants. We found considerable differences between the profiles in settings with few decisions. When the number of decisions is high (> 1/h), participants exclusively preferred fully autonomous PPAs. Finally, we discuss implications and recommendations for designing scalable PPAs that serve as privacy interfaces for future IoT devices.
3
Metaphors in Voice User Interfaces: A Slippery Fish
Smit Desai (University of Illinois, Urbana-Champaign, Champaign, Illinois, United States), Michael Bernard Twidale (University of Illinois at Urbana-Champaign, Urbana, Illinois, United States)
We explore a range of different metaphors used for Voice User Interfaces (VUIs) by designers, end-users, manufacturers, and researchers, using a novel framework derived from semi-structured interviews and a literature review. We focus less on the well-established idea of metaphors as a way for interface designers to help novice users learn to interact with novel technology, and more on other ways metaphors can be used. We find that the metaphors people use are contextually fluid, can change with the mode of conversation, and can reveal differences in how people perceive VUIs compared to other devices. Not all metaphors are helpful, and some may be offensive. Analyzing this broader class of metaphors can help us understand, and perhaps even predict, problems. Metaphor analysis can be a low-cost tool to inspire design creativity and facilitate complex discussions about sociotechnical issues, enabling us to spot potential opportunities and problems in the situated use of technologies.
3
Understanding Users' Interaction with Login Notifications
Philipp Markert (Ruhr University Bochum, Bochum, Germany), Leona Lassak (Ruhr University Bochum, Bochum, Germany), Maximilian Golla (CISPA Helmholtz Center for Information Security, Saarbrücken, Germany), Markus Dürmuth (Leibniz University Hannover, Hannover, Germany)
Login notifications are intended to inform users about sign-ins and help them protect their accounts from unauthorized access. Notifications are usually sent if a login deviates from previous ones, potentially indicating malicious activity. They contain information like the location, date, time, and device used to sign in. Users must then verify whether they recognize the login (because it was them or someone they know) or protect their account from unwanted access. In a user study, we explore users' comprehension of, reactions to, and expectations of login notifications. We use two treatments to measure users' behavior in response to notifications sent either for a login they initiated or for a login by a simulated malicious actor relying on statistical sign-in information. We find that users identify legitimate logins but need more support to halt malicious sign-ins. We discuss the identified problems and give recommendations for service providers to ensure usable and secure logins for everyone.
2
Uncovering and Addressing Blink-Related Challenges in Using Eye Tracking for Interactive Systems
Jesse W. Grootjen (LMU Munich, Munich, Germany), Henrike Weingärtner (LMU Munich, Munich, Germany), Sven Mayer (LMU Munich, Munich, Germany)
Interactive systems increasingly use physiological sensing to enable advanced functionality. While eye tracking is a promising means of understanding the user, eye-tracking data inherently suffers from missing samples due to blinks, which may degrade system performance. We conducted a literature review to understand how researchers deal with this issue and found that they often implement use-case-specific pipelines, ranging from ignoring missing data to artificial interpolation. With these first insights, we ran a large-scale analysis on 11 publicly available datasets to understand the impact of the various approaches on data quality and accuracy, highlighting the pitfalls in data processing and which methods work best. Based on our results, we provide guidelines for handling eye-tracking data in interactive systems. Further, we propose a standard data processing pipeline that allows researchers and practitioners to pre-process and standardize their data efficiently.
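One of the pipeline choices the review compares (interpolating short gaps while leaving long dropouts missing) can be sketched as follows; the sampling rate and gap threshold are illustrative assumptions rather than values from the paper.

```python
# Sketch of a common pre-processing choice the review compares: linearly
# interpolate short NaN gaps (likely blinks), leave long dropouts missing.
# Sampling rate and the 100 ms gap threshold are illustrative assumptions.
import numpy as np

def fill_blinks(gaze, sample_rate_hz=250, max_gap_ms=100):
    gaze = np.asarray(gaze, dtype=float)
    out = gaze.copy()
    max_gap = int(max_gap_ms / 1000 * sample_rate_hz)
    mask = np.isnan(gaze).astype(np.int8)
    edges = np.flatnonzero(np.diff(np.concatenate(([0], mask, [0]))))
    for start, end in zip(edges[::2], edges[1::2]):  # each NaN run [start, end)
        if end - start <= max_gap and start > 0 and end < len(gaze):
            filled = np.linspace(gaze[start - 1], gaze[end], end - start + 2)
            out[start:end] = filled[1:-1]
    return out

x = np.array([1.0, 1.1, np.nan, np.nan, 1.4, 1.5])
print(fill_blinks(x, sample_rate_hz=50))  # -> [1.0 1.1 1.2 1.3 1.4 1.5]
```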
2
A Systematic Review and Meta-analysis of the Effectiveness of Body Ownership Illusions in Virtual Reality
Aske Mottelson (IT University of Copenhagen, Copenhagen, Denmark), Andreea Muresan (University of Copenhagen, Copenhagen, Denmark), Kasper Hornbæk (University of Copenhagen, Copenhagen, Denmark), Guido Makransky (University of Copenhagen, Copenhagen, Denmark)
Body ownership illusions (BOIs) occur when participants experience their actual body as replaced by a body shown in virtual reality (VR). Based on a systematic review of the cumulative evidence on BOIs from 111 research articles published between 2010 and 2021, this article summarizes the findings of empirical studies of BOIs. Following the PRISMA guidelines, the review points to diverse experimental practices for inducing and measuring body ownership. The two major components of embodiment measurement, body ownership and agency, are examined. The embodiment of virtual avatars generally leads to modest body ownership and slightly higher agency. We also find that BOI research lacks statistical power and standardization across tasks, measurement instruments, and analysis approaches. Furthermore, the reviewed studies show a lack of clarity in fundamental terminology, constructs, and theoretical underpinnings. These issues restrict scientific advances on the major components of BOIs and together impede scientific rigor and theory-building.
2
Spatial Gaze Markers: Supporting Effective Task Switching in Augmented Reality
Mathias N. Lystbæk (Aarhus University, Aarhus, Denmark), Ken Pfeuffer (Aarhus University, Aarhus, Denmark), Tobias Langlotz (University of Otago, Dunedin, New Zealand), Jens Emil Sloth Grønbæk (Aarhus University, Aarhus, Denmark), Hans Gellersen (Lancaster University, Lancaster, United Kingdom)
Task switching occurs frequently in daily routines involving physical activity. In this paper, we introduce Spatial Gaze Markers, an augmented reality tool that supports users in immediately returning to the last point of interest after an attention shift. The tool is task-agnostic, using only eye-tracking information to infer distinct points of visual attention and mark the corresponding area in the physical environment. We present a user study that evaluates the effectiveness of Spatial Gaze Markers in simulated physical repair and inspection tasks against a no-marker baseline. The results give insights into how Spatial Gaze Markers affect user performance, task load, and experience across task types and levels of distraction. Our work is relevant for assisting physical workers with simple AR techniques, making task switching faster and less effortful.
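Inferring the last point of interest from gaze alone could be approximated with a standard dispersion-threshold (I-DT style) fixation detector, as in the hedged sketch below; the window size and threshold are assumed values, not the paper's parameters.

```python
# Plausible reconstruction of inferring the "last point of interest" from
# raw gaze with a dispersion-threshold (I-DT style) fixation detector.
# Window size and dispersion threshold are assumed values, not the paper's.
import numpy as np

def last_fixation(gaze_xy, min_samples=25, max_dispersion=0.05):
    """Return the centroid of the most recent low-dispersion gaze window
    (a fixation), i.e., where an AR marker could be placed."""
    g = np.asarray(gaze_xy, dtype=float)
    for end in range(len(g), min_samples - 1, -1):   # scan backwards in time
        win = g[end - min_samples:end]
        dispersion = (win.max(axis=0) - win.min(axis=0)).sum()  # dx + dy
        if dispersion <= max_dispersion:
            return win.mean(axis=0)
    return None  # no stable fixation found

gaze = np.vstack([np.random.normal([0.5, 0.5], 0.003, (40, 2)),
                  np.random.normal([0.8, 0.2], 0.003, (30, 2))])
print(last_fixation(gaze))  # ~[0.8, 0.2]: the user's last point of interest
```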
2
Designing Haptic Feedback for Sequential Gestural Inputs
Shan Xu (Meta, Redmond, Washington, United States), Sarah Sykes (Meta, Redmond, Washington, United States), Parastoo Abtahi (Meta, Toronto, Ontario, Canada), Tovi Grossman (University of Toronto, Toronto, Ontario, Canada), Daylon Walden (Meta, Redmond, Washington, United States), Michael Glueck (Meta, Toronto, Ontario, Canada), Carine Rognon (Meta, Redmond, Washington, United States)
This work seeks to design and evaluate haptic feedback for sequential gestural inputs, where mid-air hand gestures are used to express system commands. Nine haptic patterns were first designed by leveraging metaphors. To pursue efficient interaction, we examine the trade-off between pattern duration and recognition accuracy and find that durations as short as 0.3 s to 0.5 s achieve roughly 80%-90% accuracy. We then examine the haptic design for sequential inputs, varying when the feedback for each gesture is provided, along with pattern duration, gesture sequence length, and age. Results show that providing haptic patterns immediately after detected hand gestures leads to significantly more efficient interaction than concatenating all haptic patterns after the gesture sequence. Moreover, the number of gestures had little impact on performance, but age is a significant predictor. Our results suggest that immediate feedback with pattern durations of 0.3 s and 0.5 s would be recommended for younger and older users, respectively.
2
ARCADIA: A Gamified Mixed Reality System for Emotional Regulation and Self-Compassion
José Luis Soler-Domínguez (Instituto Tecnológico de Informática, Valencia, Spain), Samuel Navas-Medrano (Instituto Tecnológico de Informática, Valencia, Spain), Patricia Pons (Instituto Tecnológico de Informática, Valencia, Spain)
Mental health and wellbeing have become one of the most significant challenges facing global society, and emotional regulation strategies hold the potential to offer a transversal approach to addressing them. However, persistently declining patient adherence to therapeutic interventions, coupled with the limited applicability of current technological interventions across diverse individuals and diagnoses, underscores the need for innovative solutions. We present ARCADIA, a Mixed Reality platform co-designed with therapists to enhance emotional regulation and self-compassion. ARCADIA comprises several gamified therapeutic activities, with a strong emphasis on fostering patient motivation. Through a dual study involving therapists and mental health patients, we validated the fully functional ARCADIA prototype. Encouraging results were observed in terms of system usability, user engagement, and therapeutic potential. These findings lead us to believe that the combination of Mixed Reality and gamified therapeutic activities could be a significant tool in the future of mental health.
2
Understanding User Acceptance of Electrical Muscle Stimulation in Human-Computer Interaction
Sarah Faltaous (University of Duisburg-Essen, Essen, Germany), Julie R. Williamson (University of Glasgow, Glasgow, United Kingdom), Marion Koelle (OFFIS - Institute for Information Technology, Oldenburg, Germany), Max Pfeiffer (Aldi Sued, Muelheim a.d.R., NRW, Germany), Jonas Keppel (University of Duisburg-Essen, Essen, Germany), Stefan Schneegass (University of Duisburg-Essen, Essen, NRW, Germany)
Electrical Muscle Stimulation (EMS) has unique capabilities that can manipulate users' actions or perceptions, such as actuating user movement while walking, changing the perceived texture of food, and guiding movements for a user learning an instrument. These applications highlight the potential utility of EMS, but such benefits may be lost if users reject EMS. To investigate user acceptance of EMS, we conducted an online survey (N=101). We compared eight scenarios, six from HCI research applications and two from the sports and health domain. To gain further insights, we conducted in-depth interviews with a subset of the survey respondents (N=10). The results point to the challenges and potential of EMS regarding social and technological acceptance, showing that there is greater acceptance of applications that manipulate action than those that manipulate perception. The interviews revealed safety concerns and user expectations for the design and functionality of future EMS applications.
2
Sweating the Details: Emotion Recognition and the Influence of Physical Exertion in Virtual Reality Exergaming
Dominic Potts (University of Bath, Bath, United Kingdom), Zoe Broad (University of Bath, Bath, United Kingdom), Tarini Sehgal (University of Bath, Bath, United Kingdom), Joseph Hartley (University of Bath, Bath, United Kingdom), Eamonn O'Neill (University of Bath, Bath, United Kingdom), Crescent Jicol (University of Bath, Bath, United Kingdom), Christopher Clarke (University of Bath, Bath, United Kingdom), Christof Lutteroth (University of Bath, Bath, United Kingdom)
There is great potential for adapting Virtual Reality (VR) exergames based on a user's affective state. However, physical activity and VR interfere with physiological sensors, making affect recognition challenging. We conducted a study (n=72) in which users experienced four emotion-inducing VR exergaming environments (happiness, sadness, stress, and calmness) at three levels of exertion (low, medium, high). We collected physiological measures through pupillometry, electrodermal activity, heart rate, and facial tracking, as well as subjective affect ratings. Our validated virtual environments, data, and analyses are openly available. We found that the level of exertion influences both the way affect can be recognised and affect itself. Furthermore, our results highlight the importance of data cleaning to account for environmental and interpersonal factors interfering with physiological measures. The results shed light on the relationships between physiological measures and affective states and inform design choices about sensors and data cleaning approaches for affective VR.
2
LegacySphere: Facilitating Intergenerational Communication Through Perspective-Taking and Storytelling in Embodied VR
Chenxinran Shen (University of British Columbia, Vancouver, British Columbia, Canada), Joanna McGrenere (University of British Columbia, Vancouver, British Columbia, Canada), Dongwook Yoon (University of British Columbia, Vancouver, British Columbia, Canada)
Intergenerational communication can enhance well-being and family cohesion, but stereotypes and low empathy can be barriers to achieving effective communication. VR perspective-taking is a potential approach that is known to enhance understanding and empathy toward others by allowing a user to take another's viewpoint. In this study, we introduce LegacySphere, a novel VR perspective-taking experience leveraging the combination of embodiment, role-play, and storytelling. To explore LegacySphere's design and impact, we conducted an observational study involving five dyads with a one-generation gap. We found that LegacySphere promotes empathetic and reflexive intergenerational dialogue. Specifically, avatar embodiment encourages what we term "relationship cushioning," fostering a trustful, open environment for genuine communication. The blending of real and embodied identities prompts insightful questions, merging both perspectives. The experience also nurtures a sense of unity and stimulates reflections on aging. Our work highlights the potential of immersive technologies for enhancing empathetic intergenerational relationships.
2
Narrating Fitness: Leveraging Large Language Models for Reflective Fitness Tracker Data Interpretation
Konstantin R. Strömel (Osnabrück University, Osnabrück, Germany), Stanislas Henry (ENSEIRB-MATMECA Bordeaux, Bordeaux, France), Tim Johansson (Chalmers University of Technology, Gothenburg, Sweden), Jasmin Niess (University of Oslo, Oslo, Norway), Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden)
While fitness trackers generate and present quantitative data, past research suggests that users often conceptualise their wellbeing in qualitative terms. This discrepancy between numeric data and personal wellbeing perception may limit the effectiveness of personal informatics tools in encouraging meaningful engagement with one's wellbeing. In this work, we aim to bridge the gap between raw numeric metrics and users' qualitative perceptions of wellbeing. In an online survey with n=273 participants, we used step data from fitness trackers and compared three presentation formats: standard charts, qualitative descriptions generated by a large language model (LLM), and a combination of both. Our findings reveal that users experienced more reflection, focused attention, and reward when presented with the generated qualitative descriptions than with the standard charts alone. Our work demonstrates how automatically generated data descriptions can effectively complement numeric fitness data, fostering a richer, more reflective engagement with personal wellbeing information.
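The qualitative-description condition can be pictured as a thin prompting layer over the step data. The sketch below uses the openai Python client as an assumed stand-in, since the paper does not publish its prompt or model.

```python
# Illustrative sketch of the qualitative-description condition: a thin
# prompting layer over weekly step counts. The prompt wording, model name,
# and use of the openai client are assumptions; the paper does not publish
# its exact prompt or model.
from openai import OpenAI

def describe_steps(daily_steps: list[int]) -> str:
    prompt = (
        "Here are a user's daily step counts for the past week: "
        f"{daily_steps}. Write two to three sentences describing the week "
        "qualitatively (rhythm, rest, effort) without quoting raw numbers."
    )
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: describe_steps([4200, 8100, 7900, 300, 9400, 10200, 6100])
```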
1
Authors' Values and Attitudes Towards AI-bridged Scalable Personalization of Creative Language Arts
Taewook Kim (Northwestern University, Evanston, Illinois, United States), Hyomin Han (Northwestern University, Evanston, Illinois, United States), Eytan Adar (University of Michigan, Ann Arbor, Michigan, United States), Matthew Kay (Northwestern University, Chicago, Illinois, United States), John Joon Young Chung (University of Michigan, Ann Arbor, Michigan, United States)
Generative AI has the potential to create a new form of interactive media: AI-bridged creative language arts (CLA), which bridge the author and audience by personalizing the author's vision to the audience's context and taste at scale. However, it is unclear what authors' values and attitudes toward AI-bridged CLA would be. To identify these values and attitudes, we conducted an interview study with 18 authors across eight genres (e.g., poetry, comics), presenting speculative but realistic AI-bridged CLA scenarios. We identified three benefits derived from the dynamics between author, artifact, and audience: benefits that 1) authors get from the process, 2) audiences get from the artifact, and 3) authors get from the audience. We found how AI-bridged CLA could either promote or reduce these benefits, along with authors' concerns. We hope our investigation hints at how AI can provide intriguing experiences to CLA audiences while promoting authors' values.
1
‘A Teaspoon of Authenticity’: Exploring How Young Adults BeReal on Social Media
Ananya Reddy (Pennsylvania State University, University Park, Pennsylvania, United States), Priya C. Kumar (Pennsylvania State University, University Park, Pennsylvania, United States)
BeReal is the latest social media platform to tout itself as a more authentic space for connection. The app notifies users at a random time each day and gives users two minutes to post an image from their smartphone’s front- and back-facing cameras. While prior work has theorized authenticity on social media and studied how various user populations enact authenticity, more research is needed to understand whether and how specific design features afford authenticity. We conducted a walkthrough of the BeReal app and interviewed 31 young adults about their experiences on BeReal. We found that participants approached authenticity in two ways—as extemporaneous interaction and as comprehensive self-presentation—and that BeReal harnesses the affordances of visibility, editability, availability, and persistence in a way that enables the former more than the latter. Based on our findings, we offer four recommendations for designers and researchers who seek to support authenticity online.
1
CRTypist: Simulating Touchscreen Typing Behavior via Computational Rationality
Danqing Shi (Aalto University, Helsinki, Finland), Yujun Zhu (Aalto University, Espoo, Finland), Jussi P. P. Jokinen (University of Jyväskylä, Jyväskylä, Finland), Aditya Acharya (University of Birmingham, Birmingham, West Midlands, United Kingdom), Aini Putkonen (Aalto University, Helsinki, Finland), Shumin Zhai (Google, Mountain View, California, United States), Antti Oulasvirta (Aalto University, Helsinki, Finland)
Touchscreen typing requires coordinating the fingers and visual attention for button-pressing, proofreading, and error correction. Computational models need to account for the fast pace, coordination issues, and closed-loop nature of this control problem, which is further complicated by the immense variety of keyboards and users. This paper introduces CRTypist, which generates human-like typing behavior. Its key feature is a reformulation of the supervisory control problem in which visual attention and the motor system are controlled with reference to a working-memory representation that tracks the text typed so far. The movement policy is assumed to asymptotically approach optimal performance within cognitive and design-related bounds. This flexible model works directly from pixels, without requiring hand-crafted feature engineering for keyboards. It aligns with human data in terms of movements and performance, covers individual differences, and generalizes to diverse keyboard designs. Though limited to skilled typists, the model generates useful estimates of the typing performance achievable under various conditions.
1
How Gaze Visualization Facilitates Initiation of Informal Communication in 3D Virtual Spaces
Junko Ichino (Tokyo City University, Yokohama, Japan), Masahiro Ide (Tokyo City University, Yokohama, Japan), Takehito Yoshiki (TIS Inc., Shinjuku, Tokyo, Japan), Hitomi Yokoyama (Okayama University of Science, Okayama, Japan), Hirotoshi Asano (Kogakuin University, Shinjuku, Tokyo, Japan), Hideo Miyachi (Tokyo City University, Yokohama, Japan), Daisuke Okabe (Tokyo City University, Yokohama, Kanagawa, Japan)
This study explores how gaze visualization in virtual spaces facilitates the initiation of informal communication. Three styles of gaze cue visualization (arrow, bubbles, and miniature avatar) with two types of gaze behavior (one-sided gaze and joint gaze) were evaluated. 96 participants used either a non-visualized gaze cue or one of the three visualized gaze cues. The results showed that all visualized gaze cues facilitated the initiation of informal communication more effectively than the non-visualized gaze cue. For one-sided gaze, overall, bubbles had more positive effects on the gaze receiver’s behaviors and experiences than the other two visualized gaze cues, although the only statistically significant difference was in the verbal reaction rates. For joint gaze, all three visualized gaze cues had positive effects on the receiver’s behaviors and experiences. The design implications of the gaze visualization and the confederate-based evaluation method contribute to research on informal communication and social virtual reality.
1
SoniWeight Shoes: Investigating Effects and Personalization of a Wearable Sound Device for Altering Body Perception, Behavior and Emotion
Amar D'Adamo (Universidad Carlos III de Madrid, Madrid, Spain), Marte Roel Lesur (Universidad Carlos III de Madrid, Madrid, Spain), Laia Turmo Vidal (Universidad Carlos III de Madrid, Madrid, Spain), Mohammad Mahdi Dehshibi (Universidad Carlos III de Madrid, Madrid, Spain), Daniel De La Prida (Universidad Carlos III de Madrid, Madrid, Spain), Joaquin R. Diaz Duran (Universidad Carlos III de Madrid, Madrid, Spain), Luis Antonio Azpicueta-Ruiz (Universidad Carlos III de Madrid, Madrid, Spain), Aleksander Väljamäe (University of Tartu, Tartu, Estonia), Ana Tajadura-Jiménez (Universidad Carlos III de Madrid, Leganés, Madrid, Spain)
Changes in body perception influence behavior and emotion and can be induced through multisensory feedback. Auditory feedback on one's actions can trigger such alterations; however, it is unclear which individual factors modulate these effects. We employ and evaluate SoniWeight Shoes, a wearable device based on the literature for altering one's weight perception through manipulated footstep sounds. In a healthy population sample (n=84) spanning varying degrees of eating disorder symptomatology, physical activity levels, body concerns, and mental imagery capacities, we explore the effects of three sound conditions (low-frequency, high-frequency, and control) on extensive body perception measures (demographic, behavioral, physiological, psychological, and subjective). Analyses revealed an impact of individual differences along each of these dimensions. Besides replicating previous findings, we highlight the role of individual differences in body perception, offering avenues for personalized sonification strategies. Datasets, technical refinements, and novel body-map quantification tools are provided.
1
QuadStretcher: A Forearm-Worn Skin Stretch Display for Bare-Hand Interaction in AR/VR
Taejun Kim (School of Computing, KAIST, Daejeon, Korea, Republic of), Youngbo Aram Shim (KAIST, Daejeon, Korea, Republic of), YoungIn Kim (School of Computing, KAIST, Daejeon, Korea, Republic of), Sunbum Kim (School of Computing, KAIST, Daejeon, Korea, Republic of), Jaeyeon Lee (UNIST, Ulsan, Korea, Republic of), Geehyuk Lee (School of Computing, KAIST, Daejeon, Korea, Republic of)
The paradigm of bare-hand interaction has become increasingly prevalent in Augmented Reality (AR) and Virtual Reality (VR) environments, propelled by advancements in hand tracking technology. However, a significant challenge arises in delivering haptic feedback to users' hands, due to the necessity for the hands to remain bare. In response to this challenge, recent research has proposed an indirect solution of providing haptic feedback to the forearm. In this work, we present QuadStretcher, a skin stretch display featuring four independently controlled stretching units surrounding the forearm. While achieving rich haptic expression, our device also eliminates the need for a grounding base on the forearm by using a pair of counteracting tactors, thereby reducing bulkiness. To assess the effectiveness of QuadStretcher in facilitating immersive bare-hand experiences, we conducted a comparative user evaluation (n = 20) against a baseline solution, Squeezer. The results confirmed that QuadStretcher outperformed Squeezer in expressing force direction and heightening the sense of realism, particularly in 3-DoF VR interactions such as pulling a rubber band, hooking a fishing rod, and swinging a tennis racket. We further discuss design insights gained from qualitative user interviews, presenting key takeaways for future forearm-haptic systems aimed at advancing AR/VR bare-hand experiences.
1
Exploring the Impact of Interconnected External Interfaces in Autonomous Vehicles on Pedestrian Safety and Experience
Tram Thi Minh Tran (School of Architecture, Design and Planning, The University of Sydney, Sydney, NSW, Australia), Callum Parker (University of Sydney, Sydney, NSW, Australia), Marius Hoggenmüller (School of Architecture, Design and Planning, The University of Sydney, Sydney, NSW, Australia), Yiyuan Wang (The University of Sydney, Sydney, Australia), Martin Tomitsch (University of Technology Sydney, Sydney, NSW, Australia)
Policymakers advocate for the use of external Human-Machine Interfaces (eHMIs) to allow autonomous vehicles (AVs) to communicate their intentions or status. Nonetheless, scalability concerns arise in complex traffic scenarios, such as potentially increasing pedestrian cognitive load or conveying contradictory signals. Building upon prior work, our study explores 'interconnected eHMIs,' where multiple AV interfaces are interconnected to provide pedestrians with clear and unified information. In a virtual reality study (N=32), we assessed the effectiveness of this concept in improving pedestrians' safety and crossing experience. We compared these results against two conditions: no eHMIs and unconnected eHMIs. Results indicated that interconnected eHMIs enhanced feelings of safety and encouraged cautious crossings. However, certain design elements, such as the use of the colour red, led to confusion and discomfort. Prior knowledge slightly influenced perceptions of interconnected eHMIs, underscoring the need for refined user education. We conclude with practical implications and future eHMI design research directions.
1
E-Acrylic: Electronic-Acrylic Composites for Making Interactive Artifacts
Bo Han (National University of Singapore, Singapore, Singapore)Xin Liu (National University of Singapore, Singapore, Singapore)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore)Clement Zheng (National University of Singapore, Singapore, Singapore)
Electronic composites incorporate computing into physical materials, expanding the materiality of interactive systems for designers. In this research, we investigated acrylic as a substrate for electronics. Acrylic is valued for its visual and structural properties and is used widely in industrial design. We propose e-acrylic, an electronic composite that incorporates electronic circuits with acrylic sheets. Our approach to making this composite is centered on acrylic-making practices that industrial designers are familiar with. We outline this approach systematically, including leveraging laser cutting to embed circuits into acrylic sheets, as well as different ways to shape e-acrylic into 3D objects. With this approach, we explored using e-acrylic to design interactive artifacts. We reflect on these applications to surface a design space of tangible interactive artifacts possible with this composite. We also discuss the implications of aligning electronics to an existing making practice, and working with the holistic materiality that e-acrylic embodies.
1
AI-Augmented Brainwriting: Investigating the use of LLMs in group ideation
Orit Shaer (Wellesley College, Wellesley, Massachusetts, United States)Angelora Cooper (Wellesley College, Wellesley, Massachusetts, United States)Osnat Mokryn (University of Haifa, Haifa, Israel)Andrew L. Kun (University of New Hampshire, Durham, New Hampshire, United States)Hagit Ben Shoshan (University of Haifa, Haifa, Israel)
The growing availability of generative AI technologies such as large language models (LLMs) has significant implications for creative work. This paper explores two aspects of integrating LLMs into the creative process: the divergence stage of idea generation, and the convergence stage of evaluation and selection of ideas. We devised a collaborative group-AI Brainwriting ideation framework, which incorporated an LLM as an enhancement into the group ideation process, and evaluated the idea generation process and the resulting solution space. To assess the potential of using LLMs in the idea evaluation process, we designed an evaluation engine and compared it to idea ratings assigned by three expert and six novice evaluators. Our findings suggest that integrating an LLM into Brainwriting could enhance both the ideation process and its outcome. We also provide evidence that LLMs can support idea evaluation. We conclude by discussing implications for HCI education and practice.
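As a rough illustration of what an LLM-based idea "evaluation engine" could look like, here is a minimal Python sketch. The `call_llm` callable stands in for any chat-completion client, and the rubric and prompt wording are our own assumptions rather than the paper's.

```python
import json

RUBRIC = ["originality", "feasibility", "relevance to the design brief"]

def rate_idea(call_llm, idea: str) -> dict:
    """Ask an LLM to score one idea against the rubric; expects JSON back."""
    prompt = (
        "Rate the following idea from 1 (poor) to 5 (excellent) on "
        f"{', '.join(RUBRIC)}. Reply with a JSON object only.\n\nIdea: {idea}"
    )
    return json.loads(call_llm(prompt))

# Stubbed client so the sketch runs end to end:
stub = lambda prompt: '{"originality": 4, "feasibility": 3, "relevance to the design brief": 5}'
print(rate_idea(stub, "solar-powered e-ink campus signage"))
```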
1
TimeTunnel: Integrating Spatial and Temporal Motion Editing for Character Animation in Virtual Reality
Qian Zhou (Autodesk Research, Toronto, Ontario, Canada)David Ledo (Autodesk Research, Toronto, Ontario, Canada)George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada)Fraser Anderson (Autodesk Research, Toronto, Ontario, Canada)
Editing character motion in Virtual Reality is challenging as it requires working with both spatial and temporal data using controls with multiple degrees of freedom. The spatial and temporal controls are separated, making it difficult to adjust poses over time and predict the effects across adjacent frames. To address this challenge, we propose TimeTunnel, an immersive motion editing interface that integrates spatial and temporal control for 3D character animation in VR. TimeTunnel provides an approachable editing experience via KeyPoses and Trajectories. KeyPoses are a set of representative poses automatically computed to concisely depict motion. Trajectories are 3D animation curves that pass through the joints of KeyPoses to represent in-betweens. TimeTunnel integrates spatial and temporal control by superimposing Trajectories and KeyPoses onto a 3D character. We conducted two studies to evaluate TimeTunnel. In our quantitative study, TimeTunnel reduced the amount of time required for editing motion, and saved effort in locating target poses. Our qualitative study with domain experts demonstrated how TimeTunnel is an approachable interface that can simplify motion editing, while still preserving a direct representation of motion.
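The abstract's "KeyPoses" are representative poses computed automatically from a motion. One standard way to do this, shown in the hedged Python sketch below, is to greedily pick the frames that deviate most from linear interpolation between already-chosen keyframes; this is a generic keyframe-extraction heuristic, not necessarily TimeTunnel's algorithm.

```python
import numpy as np

def key_poses(motion: np.ndarray, k: int) -> list[int]:
    """motion: (frames, dims) array of flattened joint data; returns k frame indices."""
    keys = [0, len(motion) - 1]
    while len(keys) < k:
        keys.sort()
        best_frame, best_err = None, -1.0
        for a, b in zip(keys, keys[1:]):
            for f in range(a + 1, b):
                t = (f - a) / (b - a)
                interp = (1 - t) * motion[a] + t * motion[b]  # the "in-between"
                err = float(np.linalg.norm(motion[f] - interp))
                if err > best_err:
                    best_frame, best_err = f, err
        if best_frame is None:
            break  # every gap is already saturated with keyframes
        keys.append(best_frame)
    return sorted(keys)

t = np.linspace(0, 2 * np.pi, 60)
motion = np.stack([np.sin(t), np.cos(t)], axis=1)  # toy one-joint 2D motion
print(key_poses(motion, 5))
```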
1
PaperTouch: Tangible Interfaces through Paper Craft and Touchscreen Devices
Qian Ye (National University of Singapore, Singapore, Singapore)Zhen Zhou Yong (National University of Singapore, Singapore, Singapore)Bo Han (National University of Singapore, Singapore, Singapore)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore)Clement Zheng (National University of Singapore, Singapore, Singapore)
Paper and touchscreen devices are two common objects found around us, and we investigated the potential of their intersection for tangible interface design. In this research, we developed PaperTouch, an approach to designing paper-based mechanisms that translate a variety of physical interactions to touch events on a capacitive touchscreen. These mechanisms act as switches that close during interaction, connecting the touchscreen to the device’s ground bus. To develop PaperTouch, we explored different types of paper along with the making process around them. We also built a range of applications to showcase different tangible interfaces facilitated by PaperTouch, including musical instruments, educational dioramas, and playful products. By reflecting on this exploration, we uncovered emerging design dimensions that consider the interactions, materiality, and embodiment of PaperTouch interfaces. We also surfaced, through annotations, the tacit know-how that we gained during our design process for others to refer to.
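On the software side, a PaperTouch-style interface could resolve touches at known electrode positions into named tangible controls. The following Python sketch is purely illustrative; the coordinates, control names, and hit-testing radius are assumptions, not the paper's implementation.

```python
CONTACT_POINTS = {           # (x, y) electrode position in pixels -> control
    (120, 840): "drum",
    (360, 840): "chime",
}

def resolve_touch(x: int, y: int, radius: int = 40) -> str | None:
    """Return the control whose contact point the touch landed on, if any."""
    for (cx, cy), name in CONTACT_POINTS.items():
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
            return name
    return None

print(resolve_touch(125, 838))  # a paper drum-pad closing -> "drum"
```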
1
Designing for Human Operations on the Moon: Challenges and Opportunities of Navigational HUD Interfaces
Leonie Bensch (German Aerospace Center (DLR), Cologne, North Rhine-Westphalia, Germany)Tommy Nilsson (European Space Agency (ESA), Cologne, Germany)Jan Wulkop (German Aerospace Center (DLR), Braunschweig, Germany)Paul Demedeiros (European Space Agency, Cologne, North Rhine-Westphalia, Germany)Nicolas Daniel Herzberger (Fraunhofer FKIE, Aachen, Germany)Michael Preutenborbeck (RWTH Aachen University, Aachen, Germany)Andreas Gerndt (German Aerospace Center (DLR), Braunschweig, Germany)Frank Flemisch (RWTH Aachen University, Aachen, Germany)Florian Dufresne (Arts et Métiers Institute of Technology, F-53810 CHANGE, France)Georgia Albuquerque (German Aerospace Center, Braunschweig, Germany)Aidan Cowley (European Space Agency, Cologne, North Rhine-Westphalia, Germany)
Future crewed missions to the Moon will face significant environmental and operational challenges, posing risks to the safety and performance of astronauts navigating its inhospitable surface. Whilst head-up displays (HUDs) have proven effective in providing intuitive navigational support on Earth, the design of novel human-spaceflight solutions typically relies on costly and time-consuming analogue deployments, leaving the potential use of lunar HUDs largely under-explored. This paper explores an alternative approach by simulating navigational HUD concepts in a high-fidelity Virtual Reality (VR) representation of the lunar environment. In evaluating these concepts with astronauts and other aerospace experts (n=25), our mixed methods study demonstrates the efficacy of simulated analogues in facilitating rapid design assessments of early-stage HUD solutions. We illustrate this by elaborating key design challenges and guidelines for future lunar HUDs. In reflecting on the limitations of our approach, we propose directions for future design exploration of human-machine interfaces for the Moon.
1
Augmented Reality Cues Facilitate Task Resumption after Interruptions in Computer-Based and Physical Tasks
Kilian L. Bahnsen (Julius-Maximilians-Universität Würzburg, Würzburg, Germany)Lucas Tiemann (Julius-Maximilians-Universität Würzburg, Würzburg, Germany)Lucas Plabst (Julius-Maximilians-University Würzburg, Würzburg, Germany)Tobias Grundgeiger (Julius-Maximilians-Universität Würzburg, Würzburg, Germany)
Many work domains include numerous interruptions, which can contribute to errors. We investigated the potential of augmented reality (AR) cues to facilitate primary task resumption after interruptions of varying lengths. Experiment 1 (N = 83) involved a computer-based primary task with a red AR arrow at the to-be-resumed task step, placed either by the participants via a gesture or automatically. Compared to no cue, both cues significantly reduced the resumption lag (i.e., the time between the end of the interruption and the resumption of the primary task) following long but not short interruptions. Experiment 2 (N = 38) involved a tangible sorting task, utilizing only the automatic cue. The AR cue facilitated task resumption compared to no cue after both short and long interruptions. We demonstrate the potential of AR cues in mitigating the negative effects of interruptions and make suggestions for integrating AR technologies for task resumption.
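For clarity, the resumption-lag measure defined in the abstract is straightforward to compute from an event log, as in this small Python sketch (the event fields and timestamps are illustrative, not the study's data).

```python
def resumption_lags(log):
    """Lag = time from each interruption's end to the next primary-task event."""
    lags, pending = [], None
    for event in sorted(log, key=lambda e: e["t"]):
        if event["kind"] == "interruption_end":
            pending = event["t"]
        elif event["kind"] == "primary_action" and pending is not None:
            lags.append(event["t"] - pending)
            pending = None
    return lags

log = [{"t": 10.0, "kind": "interruption_end"},
       {"t": 13.5, "kind": "primary_action"}]
print(resumption_lags(log))  # [3.5] seconds
```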
1
Blended Whiteboard: Physicality and Reconfigurability in Remote Mixed Reality Collaboration
Jens Emil Sloth Grønbæk (Aarhus University, Aarhus, Denmark)Juan Sánchez Esquivel (Aarhus University, Aarhus, Denmark)Germán Leiva (Aarhus University, Aarhus, Denmark)Eduardo Velloso (University of Melbourne, Melbourne, Victoria, Australia)Hans Gellersen (Lancaster University, Lancaster, United Kingdom)Ken Pfeuffer (Aarhus University, Aarhus, Denmark)
The whiteboard is essential for collaborative work. To preserve its physicality in remote collaboration, Mixed Reality (MR) can blend real whiteboards across distributed spaces. Going beyond reality, MR can further enable interactions like panning and zooming in a virtually reconfigurable infinite whiteboard. However, this reconfigurability conflicts with the sense of physicality. To address this tension, we introduce Blended Whiteboard, a remote collaborative MR system enabling reconfigurable surface blending across distributed physical whiteboards. Blended Whiteboard supports a unique collaboration style, where users can sketch on their local whiteboards but also reconfigure the blended space to facilitate transitions between loosely and tightly coupled work. We describe design principles inspired by proxemics: supporting users in changing between facing each other and being side by side, and in switching between navigating the whiteboard synchronously and independently. Our work shows exciting benefits and challenges of combining physicality and reconfigurability in the design of distributed MR whiteboards.
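One way to picture the reconfigurability described here: each physical whiteboard acts as a movable, zoomable viewport onto a shared canvas. The Python sketch below shows a viewport transform under that reading; the function names and units are our assumptions, not the system's implementation.

```python
def board_to_canvas(x: float, y: float, pan: tuple[float, float], zoom: float):
    """Map a point on the local (physical) board to shared-canvas coordinates."""
    return pan[0] + x / zoom, pan[1] + y / zoom

def canvas_to_board(cx: float, cy: float, pan: tuple[float, float], zoom: float):
    """Inverse mapping: where a shared stroke lands on this user's board."""
    return (cx - pan[0]) * zoom, (cy - pan[1]) * zoom

# Two users can hold different (pan, zoom) viewports over the same strokes:
print(board_to_canvas(0.5, 0.3, pan=(2.0, 1.0), zoom=2.0))  # (2.25, 1.15)
```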
1
ShareYourReality: Investigating Haptic Feedback and Agency in Virtual Avatar Co-embodiment
Karthikeya Puttur Venkatraj (Centrum Wiskunde & Informatica, Amsterdam, Netherlands)Wo Meijer (TU Delft, Delft, Netherlands)Monica Perusquia-Hernandez (Nara Institute of Science and Technology, Ikoma-shi, Nara, Japan)Gijs Huisman (Delft University of Technology, Delft, Netherlands)Abdallah El Ali (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands)
Virtual co-embodiment enables two users to share a single avatar in Virtual Reality (VR). In such experiences, the illusion of shared motion control can break during joint-action activities, highlighting the need for position-aware feedback mechanisms. Drawing on the perceptual crossing paradigm, we explore how haptics can enable non-verbal coordination between co-embodied participants. In a within-subjects study (20 participant pairs), we examined the effects of vibrotactile haptic feedback (None, Present) and avatar control distribution (25-75%, 50-50%, 75-25%) across two VR reaching tasks (Targeted, Free-choice) on participants’ Sense of Agency (SoA), co-presence, body ownership, and motion synchrony. We found (a) lower SoA in the free-choice task with haptics than without, (b) higher SoA during the shared targeted task, (c) that co-presence and body ownership were significantly higher in the free-choice task, and (d) that players’ hand motions synchronized more in the targeted task. We provide cautionary considerations for including haptic feedback mechanisms in avatar co-embodiment experiences.
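The control-distribution conditions (25-75%, 50-50%, 75-25%) amount to weighting each user's contribution to the shared avatar's motion. A minimal Python sketch of such blending, assuming simple position averaging rather than the study's actual pipeline:

```python
import numpy as np

def blended_hand(pos_a: np.ndarray, pos_b: np.ndarray, weight_a: float) -> np.ndarray:
    """weight_a = 0.25 gives user A 25% control and user B the remaining 75%."""
    return weight_a * pos_a + (1.0 - weight_a) * pos_b

a = np.array([0.10, 1.20, 0.30])  # user A's tracked hand position (metres)
b = np.array([0.20, 1.00, 0.50])  # user B's tracked hand position
print(blended_hand(a, b, 0.25))   # shared avatar hand: [0.175 1.05  0.45 ]
```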
1
Designing Multispecies Worlds for Robots, Cats, and Humans
Eike Schneiders (University of Nottingham, Nottingham, United Kingdom)Steven David Benford (University of Nottingham, Nottingham, United Kingdom)Alan Chamberlain (University of Nottingham, Nottingham, United Kingdom)Clara Mancini (The Open University, Milton Keynes, United Kingdom)Simon D. Castle-Green (University of Nottingham, Nottingham, Nottinghamshire, United Kingdom)Victor Zhi Heung Ngo (University of Nottingham, Nottingham, United Kingdom)Ju Row Farr (Blast Theory, Brighton, United Kingdom)Matt Adams (Blast Theory, Brighton, United Kingdom)Nick Tandavanitj (Blast Theory, Brighton, United Kingdom)Joel E. Fischer (University of Nottingham, Nottingham, United Kingdom)
We reflect on the design of a multispecies world centred around a bespoke enclosure in which three cats and a robot arm coexist for six hours a day during a twelve-day installation as part of an artist-led project. In this paper, we present the project's design process, encompassing various interconnected components, including the cats, the robot and its autonomous systems, the custom end-effectors and robot attachments, the diverse roles of the humans-in-the-loop, and the custom-designed enclosure. Subsequently, we provide a detailed account of key moments during the deployment and discuss the design implications for future multispecies systems. Specifically, we argue that designing the technology and its interactions is not sufficient; it is equally important to consider the design of the `world' in which the technology operates. Finally, we highlight the necessity of human involvement in areas such as breakdown recovery and animal welfare, as well as humans' role as audience.
1
Design Principles for Generative AI Applications
Justin D. Weisz (IBM Research AI, Yorktown Heights, New York, United States)Jessica He (IBM Research, Yorktown Heights, New York, United States)Michael Muller (IBM Research, Cambridge, Massachusetts, United States)Gabriela Hoefer (IBM, New York, New York, United States)Rachel Miles (IBM Software, San Jose, California, United States)Werner Geyer (IBM Research, Cambridge, Massachusetts, United States)
Generative AI applications present unique design challenges. As generative AI technologies are increasingly being incorporated into mainstream applications, there is an urgent need for guidance on how to design user experiences that foster effective and safe use. We present six principles for the design of generative AI applications that address unique characteristics of generative AI UX and offer new interpretations and extensions of known issues in the design of AI applications. Each principle is coupled with a set of design strategies for implementing that principle via UX capabilities or through the design process. The principles and strategies were developed through an iterative process involving literature review, feedback from design practitioners, validation against real-world generative AI applications, and incorporation into the design process of two generative AI applications. We anticipate that these principles will usefully inform the design of generative AI applications by driving actionable design recommendations.
1
How does Juicy Game Feedback Motivate? Testing Curiosity, Competence, and Effectance
Dominic Kao (Purdue University, West Lafayette, Indiana, United States)Nick Ballou (Queen Mary University of London, London, United Kingdom)Kathrin Gerling (KIT, Karlsruhe, Germany)Heiko Breitsohl (University of Klagenfurt, Klagenfurt, Austria)Sebastian Deterding (Imperial College, London, United Kingdom)
'Juicy' feedback, that is, immediate and abundant action feedback, is widely held to make video games enjoyable and intrinsically motivating. Yet we do not know why it works: Which motives mediate it? Which features afford it? In a pre-registered (n=1,699) online experiment, we tested three motives from prior practitioner discourse (effectance, competence, and curiosity) and their connected design features. Using a dedicated action RPG and a 2x2+control design, we varied feedback amplification, success-dependence, and variability, and recorded self-reported effectance, competence, curiosity, and enjoyment as well as free-choice playtime. Structural equation models show curiosity as the strongest predictor of enjoyment and the only predictor of playtime, and support theorised competence pathways. Success dependence enhanced all motives, while amplification unexpectedly reduced them, possibly because the tested condition unintentionally impeded players' sense of agency. Our study evidences uncertain success affording curiosity as an underappreciated driver of moment-to-moment engagement, directly supports competence-related theory, and suggests that prior juicy game-feel guidance relies on legible action-outcome bindings and graded success as preconditions of positive 'low-level' user experience.
1
EcoThreads: Prototyping Biodegradable E-textiles Through Thread-based Fabrication
Jingwen Zhu (Cornell University, Ithaca, New York, United States)Lily Winagle (Cornell University, Ithaca, New York, United States)Cindy Hsin-Liu Kao (Cornell University, Ithaca, New York, United States)
We present EcoThreads, a sustainable e-textile prototyping approach for fabricating biodegradable functional threads. We synthesized two thread-based fabrication methods, wet spinning and thread coating, to fabricate functional threads from biomaterials or to modify natural fibers to achieve conductive or interactive functionality. We built a wet spinning tool from a modified DIY syringe pump to spin biodegradable conductive threads. The conductive and interactive threads can be further integrated into textiles through weaving, knitting, embroidery, and braiding. We conducted a workshop study inviting e-textile practitioners to use the materials to fabricate e-textile swatches for transient use cases. The EcoThreads approach presents a path for individual creators to incorporate biodegradable material choices toward sustainable e-textile practices.