Recent advances are integrating AI into the fabric of human social life, creating transformative, co-shaping relationships between humans and AI. This trend makes it urgent to investigate how these systems, in turn, shape their users. We conducted a three-phase design study with 24 participants to explore this dynamic. Our findings reveal critical tensions: (1) social AI often exacerbates the very interpersonal problems it is designed to mitigate; (2) it introduces nuanced privacy harms for secondary users inadvertently involved in AI-mediated social interactions; and (3) it can threaten the primary user's personal agency and identity. We argue these tensions expose a problematic tendency in the user-centered paradigm, which often prioritizes immediate user experience at the expense of core human values like interpersonal ethics and self-efficacy. We call for a paradigm shift toward a more provocative and relational design perspective that foregrounds long-term social and personal consequences.
While reciprocal self-disclosure drives intimacy, digital tools seldom scaffold autonomy, competence, and relatedness—the motivational underpinnings defined by Self-Determination Theory (SDT) that enable deep exchange. We introduce a chatbot employing dual-layer scaffolding to satisfy these needs: first providing enabling affordances (instrumental support) for vulnerability, then mediating affordances (relational support) for responsiveness. In a randomized study (N = 72; 36 couples) comparing Partner Support (PS: both layers), Direct Support (DS: enabling only), and Basic Prompt (BP: questions only), results reveal a critical distinction. While enabling affordances (PS, DS) were sufficient to deepen disclosure, only mediating affordances (PS) reliably elicited partner-provided need support and increased perceived closeness. Furthermore, controlled motivation decreased across all conditions, and scaffolding (PS, DS) buffered vitality, which stagnated in BP. We contribute empirical evidence that SDT-guided mediation fosters connection, offering a practical framework for designing AI-mediated conversations that support, rather than replace, human intimacy.
Eye contact between strangers, even fleeting, can spark interaction and foster connection, happiness, and belonging. Yet in public spaces, such encounters are often suppressed by “civil inattention,” with many people absorbed in their phones. We explore how reconfiguring the ambient environment with MirrorBot, a mobile robot with adaptive mirrors, can encourage social encounters by subtly redirecting glances. By shifting reflections between self- and mutual recognition, MirrorBot invites serendipitous eye contact, shared awareness, and low-stakes engagement. In a controlled 2×2 between-subjects study with 90 participants (45 dyads) across four conditions (MirrorBot, Bot-only, Mirror-only, and control), we found that MirrorBot led participants to initiate conversation more often, feel greater closeness and togetherness, and have more enjoyable interactions. Our findings position robots not only as social agents but as socio-spatial interfaces that choreograph sight lines and shared attention in physical space, opening new possibilities for technologies that cultivate human connection in public life.
AI chatbots, built using large language models, are increasingly integrated into society and mimic the patterns of human text exchanges. While previous research has raised concerns that humans may form romantic attachments to chatbots, the range of AI-mediated interactions that people wish to create for themselves or others with chatbots remains poorly understood, particularly given the fast-evolving landscape of chatbots. We provide an empirical study of Character.AI (cAI), a popular chatbot platform that enables users to design and share character-based bots, and synthesize this with an analysis of Reddit posts from cAI users. Contrary to popular narratives, we identify that users want to: (1) engage in intimate role-play with young adult, masculine-presenting characters that place users in a position of inferior power in well-defined scenarios and (2) immerse themselves in boundless fantasy settings. We further find that users problematize both excessive and insufficient sexualized content in such interactions, which warrants novel digital-safety features.
As a primary channel for sustaining modern intimate relationships, instant messaging facilitates frequent connection across distances. However, today's tools often dilute care: they favor single-tap reactions and vague emojis that do not support two-way, action-based responses, do not sustain a sense of unbroken exchange, and are only weakly tied to who we are and what we share. To address this challenge, we present PuppetChat, a dyadic messaging prototype that restores this expressive depth through embodied interaction. PuppetChat uses a reciprocity-aware recommender to encourage responsive actions and generates personalized micronarratives from user stories to ground interactions in personal history. Our 10-day field study with 11 dyads of close partners or friends revealed that this approach enhanced social presence, supported more expressive self-disclosure, and sustained continuity and shared memories.
Romantic couples often face challenges in navigating complex emotions and relationship dynamics through verbal communication alone, which can limit opportunities for deeper connection and understanding. To address this, we present an AI-mediated collaborative drawing system that enables couples to engage in structured drawing activities while analyzing their interactions. Inspired by interviews with art therapists, our system integrates behavioral data collection, AI-generated questions, and a comprehensive report synthesizing multimodal evidence. We conducted a user study involving 20 couples (N = 40) to evaluate the system's effectiveness. Our findings demonstrate that the system fosters self-reflection, partner understanding, and relational awareness with high user acceptance. Participants highlighted the value of non-verbal communication as a unique pathway for gaining relational insight and deeper mutual understanding. Our work contributes design implications for AI-mediated relationship tools that position AI as a facilitator, providing accessible and creative avenues for couples to explore relational patterns and strengthen communication.
In intimate messaging, how a difficult note is produced signals effort and ownership. We study how AI assistance level (light tone rewrite vs. heavy full draft) and a brief co-sign disclosure shape receiver attributions and outcomes in apologies and boundary-setting. Study 1 (N=152) instrumented authoring to build a many-stimuli corpus. Study 2 (N=704) tested effects in a mixed-effects experiment. Heavier drafting reliably reduced perceived ownership and authenticity; gains in clarity and competence did not compensate. In apologies, co-signing a tone rewrite increased authenticity and forgiveness; co-signing a full draft decreased both. In boundary requests, co-signing yielded small or negative shifts. Stimulus-level analyses tied idiosyncratic "voice" cues to ownership and authenticity, and revealed sender-receiver miscalibration. We contribute: (i) scenario-aware causal estimates for assistance level and disclosure; (ii) an empirically grounded, scenario-aware attributional account alongside a competence–integrity dissociation; (iii) evidence of sender–receiver miscalibration; and (iv) design guidance: voice-preserving defaults, ownership-restoring scaffolds, and CPM-informed disclosure.