The centralized content moderation paradigm both falls short and overreaches: 1) it fails to account for the subjective nature of harm, and 2) it responds to content deemed harmful with blunt suppression, even when such content could be salvaged. We first investigate this through formative interviews, documenting how seemingly benign content becomes harmful through individual life experiences. Based on these insights, we developed DIY-MOD, a browser extension that operationalizes a new paradigm: personalized content transformation. Operating on a user's own definition of harm, DIY-MOD transforms sensitive elements within content in real time instead of suppressing the content itself. The system selects the most appropriate transformation for a piece of content from a diverse palette, ranging from obfuscation to artistic stylizing, to match the user's specific needs while preserving the content's informational value. Our two user studies demonstrate that this approach increases users' sense of agency and safety, enabling them to engage with content and communities they previously needed to avoid.
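To make the selection step concrete, here is a minimal sketch of how a user-defined sensitivity rule could be mapped to a transformation; the rule schema, intensity scale, and transformation names are illustrative assumptions, not DIY-MOD's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum


class Transform(Enum):
    """Illustrative palette (names assumed, not DIY-MOD's)."""
    BLUR = "blur"              # obfuscate the sensitive element
    CARTOONIZE = "cartoonize"  # artistic stylizing that softens realism
    OVERLAY = "overlay"        # keep content behind a click-through warning


@dataclass
class SensitivityRule:
    """One entry in a user's personal definition of harm."""
    trigger: str    # e.g. "needles"
    intensity: int  # 1 (mild) .. 3 (severe), set by the user


def choose_transform(rule: SensitivityRule) -> Transform:
    """Pick the gentlest transformation that still protects the user,
    preserving the content's informational value where possible."""
    if rule.intensity >= 3:
        return Transform.BLUR        # strongest intervention
    if rule.intensity == 2:
        return Transform.CARTOONIZE  # soften without removing
    return Transform.OVERLAY         # mildest: warn, do not alter


if __name__ == "__main__":
    for rule in [SensitivityRule("needles", 3), SensitivityRule("spiders", 1)]:
        print(f"{rule.trigger}: {choose_transform(rule).value}")
```

The key design choice this sketch captures is escalation: the system reaches for suppression-like transformations only at the highest user-declared intensity, defaulting to interventions that keep the content usable.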
Work is increasingly shifting away from traditional full-time jobs toward more fragmented ways of working, such as gig work and part-time jobs. Yet employment platforms like LinkedIn often privilege those with traditional credentials and work histories, presenting barriers to those with little experience in translating informal work experiences into the format such tools expect. To address this gap, we propose a narrative-based approach that enables individuals to recognize transferable skills and practice articulating them verbally and in writing in a group discussion setting. Through a participatory design workshop held in a public housing community, we demonstrate how a cultural-probe and persona-inspired activity can elicit self-reflection, enabling individuals to communicate their strengths. While prior HCI research has highlighted the critical need for reflection in the job search process, little work has been done to facilitate this reflection and its translation into employment profiles. Our work addresses this call and informs new design directions for employment technologies.
Self-disclosure is central to mental health, and chatbots are increasingly used to elicit it by lowering the risk of social judgment. With the rapid growth of voice-based chatbots, it is crucial to understand how their voice identity shapes self-disclosure, yet this relationship remains underexplored. We address this gap through a mixed-methods study combining a 14-day in-the-wild deployment (N = 61) with post-study interviews. Participants interacted daily with chatbots that spoke in one of three voices varying in social distance: their own, a family member's, or a stranger's. Findings show that chatbots using the user's own voice were rated as more attractive and sustained deeper levels of disclosure over time. Family-voice chatbots prompted reflection on interpersonal relationships, with participants reporting comfort in discussing some topics but reluctance in others. Together, these findings highlight voice identity as a key design lever for steering both the amount and focus of self-disclosure.
Analysing personal datasets has traditionally been limited to `Quantified Selfers' who invest significant effort in manually recording and analysing their data. However, the pool of Casual Users (CUs) who \textit{can} engage with their personal data is growing due to the prevalence of companies passively collecting user interaction data.
In this paper, we report an online survey exploring what kinds of information users seek about their music listening behaviour. We compare the information needs of CUs to those of identified Self-Trackers, using music listening as a lens to develop an information space.
The paper culminates in a provocation to broaden the audience of personal informatics by updating existing models of interaction to account for casual users, passive data, and episodic reflection.
Recent advances in foundation models have enabled conversational agents that aim for sustained companionship rather than mere task completion.
Yet most remain unable to support natural, long-term companion-like interactions, resulting in experiences that feel episodic and inauthentic. We argue that current agents overlook cross-temporal modeling of agents’ social behaviors and internal emotions: generated behaviors rarely influence an agent’s emotional state, and emotional states seldom shape subsequent behaviors.
We present Cross-Temporal Emotion Modeling (CTEM), a framework that links long-term behavioral history to moment-to-moment emotional expression. CTEM establishes a closed loop where past experiences update an evolving emotional state; this state conditions immediate interactions; and user feedback continually revises both memory and emotional state, enabling reflection and anticipation.
We instantiate CTEM as \textit{Auri}, a companion agent on an instant-messaging platform, and report a 21-day in-the-wild study showing that CTEM improves perceived naturalness, coherence, and emotional harmony.
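As a reading aid, here is a minimal sketch of the closed loop the abstract describes; the scalar state representation, update rule, and class names are assumptions for illustration, not Auri's implementation.

```python
from dataclasses import dataclass, field


@dataclass
class EmotionState:
    """A single scalar mood in [-1, 1]; the real framework is far richer."""
    valence: float = 0.0


@dataclass
class CompanionAgent:
    state: EmotionState = field(default_factory=EmotionState)
    memory: list = field(default_factory=list)

    def respond(self, user_msg: str) -> str:
        # Past experiences have already shaped self.state; the current
        # state conditions the immediate interaction.
        tone = "warm" if self.state.valence >= 0 else "subdued"
        self.memory.append(user_msg)  # behavior feeds long-term history
        return f"[{tone}] I hear you: {user_msg}"

    def receive_feedback(self, score: float) -> None:
        # User feedback revises the emotional state, which in turn shapes
        # subsequent behavior, closing the cross-temporal loop.
        new_valence = 0.9 * self.state.valence + 0.1 * score
        self.state.valence = max(-1.0, min(1.0, new_valence))


if __name__ == "__main__":
    agent = CompanionAgent()
    print(agent.respond("I had a rough day."))
    agent.receive_feedback(-0.8)  # a negative reaction pulls mood down
    print(agent.respond("Anyway, tell me something nice."))
```

Running this, the second reply arrives in a "subdued" tone because the negative feedback lowered the agent's valence: behavior and emotion each condition the other across turns, which is the cross-temporal property the framework targets.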
Chatbots are increasingly applied to domains previously reserved for human actors. One such domain is comedy, where both the general public working with ChatGPT and research-based LLM systems have tried their hand at making humor. In formative interviews with professional comedians and video analyses of human stand-up comedy, we found that performers often use their ethnic, gender, community, and demographic-based identity to enable joke-making. This raises the question of whether the identity of the AI itself can empower AI humor generation for human audiences. We designed a machine-identity-based agent that uses its own status as an AI to tell jokes in an online performance format. Studies with human audiences (N=32) showed that machine-identity-based agents were seen as funnier than a baseline GPT agent. This work suggests the design of human-AI integrated systems that explicitly leverage AI's own unique identity, distinct from humans.
Recommender systems are central to contemporary music listening, yet their problematic behaviors remain underexplored from the perspective of everyday listeners. While prior research has addressed issues such as bias and diversity, less is known about how users themselves perceive and interpret these dynamics in relation to music discoverability. This paper reports on think-aloud interviews with 20 Italian digital-native listeners, who completed discovery-oriented tasks while reflecting on algorithmic recommendations. Thematic analysis revealed three recurring concerns: reinforcement of societal biases, commercial imperatives driving exposure, and confinement within narrow niches. These findings show how listeners actively develop folk theories of recommender behavior, highlighting a tension between algorithmic efficiency and cultural effects. We contribute empirical insights into user sensemaking of algorithmic harms, consolidate the use of the Think-Aloud Protocol as a user-driven auditing method, and outline design implications for more participatory and equitable music recommender systems.