The process of leaving high-pressure, identity-defining communities can produce profound identity changes. This leaving process propels some people to seek support online and to share their experiences publicly. We interviewed 13 social media content creators who made content as part of, or in response to, their leaving process to understand their motivations and the ways audiences engaged with their work. We then explored how platforms transformed creators' work into collaborative spaces for social support. As creators gained audiences, their visibility introduced new incentives, obligations, and risks. Creators had to manage the challenges of maintaining safe spaces for their audiences, meeting audience expectations, and addressing heightened safety concerns for themselves. We end by discussing the networked structure of creator-centered communities, the impacts of platforms on creator communities, and the emotional harms associated with being at the center of a community focused on social support.
Social video platforms such as YouTube and Twitch increasingly moderate noncompliant video content, yet we know little about the psychological factors that drive creators to produce such videos. Drawing on theories of self-control and moral disengagement, we examine how these two factors influence creators' production of noncompliant videos, their experiences of being moderated, and, once moderated, their perceived fairness of moderation decisions and their coping strategies. Analyzing survey data from 400 video creators, we find that moral disengagement increases both the creation of noncompliant videos and the likelihood of being moderated, while self-control reduces both. Self-control and moral disengagement also shape creators' adoption of coping strategies in response to moderation decisions, but in distinct ways. These effects are further moderated by creators' reliance on video creation for income. The findings offer a fuller account of why creators offend. We discuss implications for better supporting punished creators' behavioral improvement.
Social media platforms generate personalized annual recaps that present algorithmically curated summaries of users' online activities. Unlike traditional personal informatics, where users actively collect their own data, these recaps deliver unsolicited insights that demand sensemaking effort. Through interviews with 20 participants and analysis of their annual recaps, we investigated how users make sense of and reflect on these presentations. We identified seven data presentation types and five sensemaking activities that facilitate different levels of reflection. We found that concrete presentations, such as extreme details, serve as foundational anchors across all levels, while more abstract presentations predominantly prompt critical reflection. Sensemaking activities lead to reflection through four paths: descriptive reflection involves scanning and annotation; dialogic reflection requires explanation-seeking activities; transformative reflection involves comprehensive sensemaking with an emphasis on verification; and critical reflection can emerge from any path. We contribute theoretical bridges between sensemaking and reflection in personal informatics and provide design implications for supporting sensemaking and reflection on personal data.
Crypto Key Opinion Leaders (KOLs) shape Web3 narratives and retail investment behaviour. In volatile, high-risk markets, their credibility becomes a key determinant of their influence on followers. Yet prior research has focused on lifestyle influencers or generic financial commentary, leaving crypto KOLs' own understandings of motivation, credibility, and responsibility underexplored. Drawing on interviews with 13 KOLs and self-determination theory (SDT), we examine how psychological needs are negotiated alongside monetisation and community expectations. Whereas prior work treats finfluencer credibility as a set of static credentials, our findings reveal it to be a self-determined, ethically enacted practice. We identify four community-recognised markers of credibility: self-regulation, bounded epistemic competence, accountability, and reflexive self-correction. This reframes credibility as a socio-technical performance and extends SDT into high-risk crypto ecosystems. Methodologically, we employ a hybrid human-LLM thematic analysis. The study surfaces implications for designing credibility signals that prioritise transparency over hype.
Music is increasingly performed and experienced in Social Virtual Reality (Social VR), from VRChat raves to high-production concerts on bespoke platforms. Yet Human-Computer Interaction (HCI) research still focuses mainly on building new VR systems rather than examining the communities that already create and sustain these practices. We present a cultural mapping of the Social VR music scene based on 84 survey responses, 27 interviews, and 17 event observations with diverse stakeholders, including audience members, musicians, developers, platform owners, and event organisers. We found that the scene operates as a fragmented cross-platform ecosystem sustained by user-generated infrastructure and continuous community labour. Its bottom-up organisation produces role fluidity, with individuals dynamically shifting between performing, world building, organising, and attending. However, the openness that enables this creativity also creates tensions between expectations of free access and the financial and emotional labour required to keep events running. Taken together, our findings reveal the vibrant cultural practices that continue to flourish in Social VR, even as corporate narratives declare the "metaverse" dead.
Unlike conventional social AI agents, AI streamers are multi-modal artificial intelligence systems that engage in autonomous, real-time social interactions with audiences in dynamic, public online spaces. Through a qualitative thematic analysis of 1,891 comments on the YouTube channel of Neuro-sama, an exemplar of popular AI streamers, we find that AI streamers enhance viewers' experiences through their distinctive personality development, behavioral autonomy, and nuanced AI-creator relationships; yet they also raise concerns about emotional damage, problematic training data, and heightened moderation challenges in real-time streaming environments. We contribute to HCI at the intersection of AI for social needs and live streaming research by highlighting how AI streamers reshape live streaming practices through innovative content creation, novel streamer identity practices, and rich streamer-audience interaction mechanisms. We also propose three design principles for strengthening AI streamers' social and creative affordances while mitigating the identified risks, which can inform broader AI agent design in public online social spaces.
Fairness is a recurring challenge in grassroots digital infrastructures, where collective action depends on volunteer contributions. This paper presents a study of Foodsharing.de, a grassroots FOSS platform with 185,000 members rescuing and redistributing surplus food. Drawing on 25 interviews and long-term activist involvement, we analyze two justice-oriented features: the Cherry-Picking Rule (distributional fairness) and Commitment Statistics (contributional fairness). We show how these fairness features become deeply entangled in practice and how they operate as policy-in-code, inscribing fairness logics into software and redistributing not only food and labor but also authority within the community. Rather than settling questions of justice, these interventions trigger renewed negotiation across deliberative spaces and everyday coordination, as encoded rules are interpreted, contested, and adapted. Building on these dynamics, we outline governance directions for justice-oriented grassroots infrastructures, highlighting the need for contestability and accountable autonomy to sustain negotiation and align technical change with community legitimacy.