Decentralised social media (DSM) platforms such as Mastodon offer community-governed alternatives to corporate social networks but place substantial governance burdens on volunteer operators. As interest grows in applying artificial intelligence (AI) to support this work, little is known about whether DSM operators want AI, what roles they consider appropriate, and what governance boundaries they require. We conducted semi-structured interviews with 20 operators across Mastodon, Pixelfed, PeerTube, Lemmy, Pleroma, and Funkwhale, using generative feature probes and speculative scenarios to explore their perceptions of AI. Operators rejected AI as an autonomous actor, instead envisioning it as governance infrastructure that provides contextual intelligence, supports cross-instance coordination, and sustains community and moderator well-being. They also articulated strict boundaries rooted in DSM values, including human accountability, reversibility, transparency, community-centred configuration, and strong data-governance constraints. We contribute empirical insights and design implications for AI compatible with decentralised, federated social media.
Social media platforms are increasingly adopting features that display crowdsourced context alongside posts, a technique pioneered by X's Community Notes.
These systems---which we term \textit{Crowdsourced Context Systems} (CCS)---have the potential to reshape the information ecosystem as major platforms embrace them as alternatives to professional fact-checking.
To understand the features and implications of these systems, we conduct a systematic literature review of existing CCS research (n=56) and analyze real-world CCS implementations.
Based on our analysis, we develop a framework with two components.
First, we present a theoretical model to conceptualize and define CCS.
Second, we identify a design space encompassing six aspects: participation, inputs, curation, presentation, platform treatment, and transparency.
We also surface normative implications of different CCS design and implementation choices.
Our work integrates theoretical, design, and ethical perspectives to establish a foundation for future human-centered research on Crowdsourced Context Systems.
Platform workers often experience isolation in their work. They use online forums to connect, but the moderation of these forums remains underexplored. Article 20 of the EU Platform Work Directive requires digital labour platforms to provide workers with a “communication channel” but leaves how to design it up to the platforms. To inform this design question, we qualitatively analyse community rules and moderator comments across 28 worker subreddits. We show how moderators work to reduce harms such as racism and doxxing, cultivate their communities through curation, and decide whether to enforce or resist work-platform policy. The discussion presents implications for the design of worker communication channels: they should be independently moderated spaces with data protection by design that enable workers to safely build collective knowledge without fear of platform monitoring. Future work should follow implementations during transposition and test which governance and interface choices produce trust and capacity for collective action. Our contribution is to surface the governance dimension of worker communication and to translate these insights into design implications for future channels.
Unused online accounts (“zombie accounts”) pose avoidable privacy and security risks by retaining personal data that may be exposed in breaches. Yet, little is known about when and how to effectively prompt users to delete them. This work investigates the challenges users encounter when attempting to delete zombie accounts.
We conducted two online studies with U.S. participants via Prolific: the accounts study (N = 120) to identify common zombie account categories, and the challenges study (N = 100) to examine users’ motivations, perceived abilities, and preferred moments for deletion. Participants reported high self-efficacy but underestimated the number of zombie accounts they had.
We identify opportune moments for prompting deletion, such as when updating account information or setting up a new device, and evaluate potential triggers, including breach notifications and data sensitivity. This work contributes an empirical characterization of the diverse challenges end users face with zombie accounts, along with design recommendations for future deletion-support tools.
Decentralizing the governance of social computing systems to communities promises to empower them to make independent decisions, with nuance and in context. Yet communities do not govern in isolation: many of the problems they face are shared, or move across community boundaries. We propose designing for inter-community governance: mechanisms that support relationships between communities so they can coordinate on governance issues. Drawing on workshops with 24 individuals involved in decentralized, community-run social media, we present six challenges in designing for inter-community governance, surfaced through the ideas participants discussed. These ideas come together as an ecosystem of resources and tools that highlight three key design principles: modularity, forkability, and polycentricity. We end by discussing how the workshop ideas might be implemented in future work aiming to support community governance in social computing more broadly.
Broadband infrastructure is often assumed to reduce informational disparities by expanding access to digital platforms. Yet less is understood about how broadband shapes participation in peer production communities, where knowledge is collectively created and maintained. Using spatial regression models on geo-tagged Wikipedia edits across U.S. counties, we examine how broadband coverage influences who contributes and how participation patterns shift. We find that broadband expansion is strongly associated with increased contributions from local casual and regular editors while reducing reliance on bot-driven activity. However, contributions remain highly concentrated, as prolific editors continue to dominate production. Moreover, we uncover spatial spillover effects, in which broadband gains in one county decrease participation in neighboring areas, revealing competitive dynamics in peer production. These findings challenge the assumption that access alone fosters equity: broadband reshapes, but does not evenly redistribute, editorial influence, with implications for infrastructure policy, platform design, and sustaining inclusive peer production.
Child sexual abuse material (CSAM) presents a critical challenge for online safety, yet the verification procedures that determine which items are classified as CSAM remain poorly understood. Triple verification (requiring three reviewers to agree) is promoted as a safeguard, but little is known about how it is implemented, how it is perceived by experts, and how voting conditions affect reliability. We address this gap through a mixed-methods study. We interviewed 14 experts from seven organizations (e.g., law enforcement and hotlines) to map current verification practices, then ran an inter-rater reliability experiment in which Dutch National Police experts reviewed 2,031 images and videos under different voting conditions (blind vs. non-blind, varied order). Finally, we held a focus group to explore the reasons behind disagreements. We find that practices vary widely, that perceptions of triple verification reflect both safeguards and burdens, and that expert agreement depends on voting conditions and content type.