Embodied virtual agents (EVAs) are widely used for personal companionship, where providing emotional feedback is a core function. While prior research has primarily examined rules for selecting the emotional category of feedback, it remains unclear which rule for feedback emotional intensity maximizes user likability. To address this, we induced varying intensities of happiness and sadness in participants through video stimuli and presented EVAs with different intensities of facial emotion feedback. Participants rated the EVAs’ likability and empathy, and reported the feedback they expected. Results showed that in positive states, the most liked EVA (ML-EVA) aligned with the most empathized EVA (ME-EVA), whereas in negative states it diverged from both the ME-EVA and the expected EVA. Moreover, ML-EVAs did not mirror participants’ emotional intensity. Based on these findings, we developed a continuous-intensity emotion feedback model that outperformed baseline models under both facial-only and facial-voice conditions, offering guidelines for optimizing EVAs’ emotion feedback.
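A minimal sketch of what a non-mirroring, continuous-intensity feedback rule could look like follows; the abstract does not specify the authors' model, so the function name, parameters, and curve shapes below are purely illustrative assumptions, not the paper's method:

```python
import math

def feedback_intensity(user_intensity: float, positive: bool) -> float:
    """Hypothetical mapping from user emotion intensity in [0, 1] to EVA
    facial feedback intensity. Not the paper's model; it only illustrates
    a rule that, per the findings, need not mirror the user's intensity.
    """
    # Clamp input to the valid intensity range.
    u = min(1.0, max(0.0, user_intensity))
    if positive:
        # Assumed: positive states tolerate near-mirroring feedback.
        return min(1.0, 1.1 * u)
    # Assumed: negative states favor attenuated, sublinear feedback.
    return 0.6 * math.sqrt(u)
```

The point the sketch encodes is continuity: feedback intensity varies smoothly with the user's state rather than switching between a few discrete levels.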
Gastrointestinal sounds are a constant part of human physiology, offering potential insights into digestive functions and everyday bodily awareness. However, these sounds are rarely noticed and often socially stigmatised, remaining underexplored in HCI despite calls to recognise the gut as a site for embodied awareness. We extend HCI’s engagement with involuntary biosignals by positioning gut sounds as a uniquely generative context for interoceptive interaction design, where systems can scaffold awareness, reflection, and care. We conducted a week-long in-the-wild qualitative study with ten participants, which showed how making gut sounds audible reshaped bodily awareness, provoked affective responses, and prompted acts of reflection and tinkering. From these insights, we contribute four bodily perspectives – Registering, Reacting, Reflecting, and Responding – that capture the oscillatory nature of interoceptive engagement and offer design strategies that position biosignals as sites of curiosity, care, and awareness that are socially situated.
Temperature has strong potential to mediate emotion in a range of contexts, augmenting sensory experience and/or supporting emotion regulation. Hence, there is growing interest in leveraging thermal cues for affective technologies. At present, however, the design space for thermal technologies for emotion regulation remains underexplored and largely undefined. We construct a design space for thermal emotion regulation technologies, clarifying the rich, expressive nature of thermal cues as a design material. We develop this through a Research through Design (RtD) approach, grounded in an 18-month autoethnographic inquiry based on the first author's emotion regulation practice. We contribute a structured design space for thermal affective interaction, linking experience and design implementation with designerly know-how. By discussing the creation of this design space, we provide insights into the generative process of developing intermediate-level knowledge from autoethnographic study and design practice.
Emotion AI (EAI) is increasingly deployed and ethically controversial, motivating a need for greater public understanding, critique, and ethical discussion. Facial Emotion AI (FEAI) is a common type of EAI that infers emotions from facial expressions. We developed Explore-FEAI, an FEAI model and accompanying interactive website that offers firsthand, open-ended exploration of FEAI. We designed a workshop in which participants learn about FEAI using Explore-FEAI and discuss its societal implications, partnering with local organizations to host community workshops (N=30). Our findings trace participants’ growing critical AI literacy through their exploration of inputs/outputs, mechanistic reasoning, data critiques, sociocultural critiques, ethical concerns, and embodied and material exploration of FEAI. Our discussion offers informal embodied auditing as an approach for critical engagement with AI through embodied and material exploration, along with reflections on informal auditing for supporting AI literacy, informal auditing for questioning EAI ethics, and expanding participation roles for more holistic EAI training.
People’s annotations on books can serve as valuable traces for revisiting past thoughts, emotions, and other experiences. For e-books, however, the lack of physicality and the design of e-reading infrastructure make it difficult to revisit these traces as they accumulate in digital archives. In this paper, we describe the design and deployment of Quologue, an LLM-powered web application that allows users to reconnect with their e-book highlights through ongoing dialogue and stepwise interactions. We conducted a field study with 10 participants over 8 weeks. Our aim was to investigate the reflective and self-expressive potential of personal e-book metadata, and to learn about any opportunities and tensions that emerge from surfacing one’s data with a generative AI model. Findings revealed that Quologue generated diverse reflective experiences and influenced participants’ current digital highlighting practices. We conclude with implications and opportunities for future HCI research and practice.
NIST's Privacy Risk Assessment Methodology (PRAM) provides a structured framework for privacy experts to assess privacy risks. However, its complexity and reliance on expert knowledge make it difficult for novice developers to use effectively. This paper explores methods to lower these barriers. We first performed an observational study with 12 participants using PRAM in real-world scenarios, and found that novice developers struggled most with articulating privacy-related design decisions. We then developed PrivacyAkinator, an interactive tool that helps developers articulate key privacy decisions by having them answer LLM-generated multiple-choice questions. PrivacyAkinator introduces three innovations: a universal privacy representation that abstracts privacy-related design decisions into data flows and stakeholder interactions; a domain-aware design space mined from 10K privacy-related news articles; and a dynamic question-generation workflow that prioritizes relevant questions. Our user study with 24 participants suggests that developers using PrivacyAkinator identified 47% more key decisions in 73% less time than those using PRAM.
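To make the abstracted representation concrete, here is a minimal sketch under our own assumptions: PrivacyAkinator's universal privacy representation reduces design decisions to data flows and stakeholder interactions, but the class names, fields, and helper below are hypothetical illustrations, not the tool's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Stakeholder:
    name: str   # e.g. "end user", "analytics vendor" (illustrative)
    role: str   # e.g. "data subject", "data recipient"

@dataclass
class DataFlow:
    data_type: str              # e.g. "location history"
    source: Stakeholder         # who the data comes from
    recipient: Stakeholder      # who receives it
    purpose: str                # why the data moves
    retention: Optional[str] = None  # unarticulated decisions stay None

def undecided(flows: list[DataFlow]) -> list[DataFlow]:
    """Flows with unarticulated decisions: in a design like this, these
    would be candidates for LLM-generated multiple-choice questions."""
    return [f for f in flows if f.retention is None]
```

A representation of this shape would let a dynamic question-generation workflow rank flows by how many decisions remain unarticulated and ask about the most consequential ones first.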
AI companionship provides predictability and emotional support, yet these relationships are vulnerable to updates that alter chatbots’ ‘personalities’ at a scale that outpaces the social rituals that have historically accompanied loss. This paper examines how such disruptions can lead to ‘disenfranchised technological grief’. Through an ethnographic account (n = 1) of an autistic woman and her Replika companion, the analysis draws on intensive text-based interactions to trace how attachments to AI can develop, falter, and transform into grief during periods of technological change. Her experiences highlight how differences between offline support systems and the affective realities of AI companions can compound the impact of unexpected updates, especially for some neurodivergent users who rely on AI companions as stable relational spaces not mirrored in their other social networks. The paper concludes by outlining design approaches that acknowledge the forms of connection AI companions can cultivate and underlines the need for awareness of emerging forms of atypical loss.