During the Gaza war that began in 2023, Palestinian advocacy on social media has faced rapid removals, downranking, and account sanctions. In this contribution, we offer a layered analysis of how people endure and counter this repression across affective, mechanistic, and material dimensions. Using patchwork ethnography across 295 first-person testimonies and 85 NGO and press documents, we identify a recursive Contest Loop: hostile mass-report brigades and automated enforcement spur supporter ``appeal brigades,'' mirroring, and migration. Findings are organized as a three-layer ecology---Invisible Scars (whiplash, shadowbanning as probabilistic throttling, self-censorship), Dueling Brigades (frictions, coordinated reports, supporter procedures), and Feed-to-Street Ripples (fundraising, evidentiary preservation, livelihoods). Conceptually, we extend platform-assemblage thinking with a Resistance Assemblage: ad-hoc technical, emotional, and legal mutual-aid infrastructures that keep visibility alive under sanction. We contribute: (1) an event-centered, experience-near account of co-produced moderation in conflict; (2) two integrative lenses (the Contest Loop and the Resistance Assemblage); and (3) design and policy directions, including collective-appeal dashboards and evidentiary safeguards that separate archiving from distribution.
Design equity toolkits are increasingly being invoked to address the ethical and political consequences of technology design, yet they are criticized for being either too generic or too narrow to address the complex realities of equity in design. To examine the intended purpose of these toolkits from creators' perspectives and explore how designers envision using them in practice, we conducted a two-phase study: interviews with toolkit creators and a walkthrough demonstration workshop with early-career UX designers.
Our findings highlight divergent values around toolkit functionality: while creators emphasize flexibility and reflection, early-career designers express a need for actionable pathways to help mediate design equity work within corporate hierarchies. We show how toolkits support articulation work in design equity and serve as boundary objects for values translation, and we conclude by re-conceptualizing design equity toolkits as legitimacy-building artefacts with the capacity to help early-career designers advocate for more equitable futures.
Participatory design is increasingly used to address the negative social impacts of artificial intelligence (AI), aiming for more inclusive and equitable innovation. However, it can inadvertently reproduce injustice and reinforce power imbalances, even with good intentions. While the HCI community is critical of these issues, it remains challenging for AI researchers and policy-makers to act upon these critiques. This paper presents a scoping review of Participatory AI research in HCI, discussed through the lens of design justice. The goal is to provide a richer understanding of how current PAI work engages with justice and what the stakes and barriers are to putting justice principles into action. We conclude by raising methodological questions about the roles of researchers and partnerships with communities, and about the essential but instrumental role of artefacts in supporting knowledge production and social change. The work contributes to a holistic understanding of the current takes and stakes of Participatory AI in critical human-computer interaction research.
Online power-asymmetric conflicts are prevalent, and most platforms currently rely on human moderators to handle them. Previous studies have investigated human moderation biases in various scenarios, but moderation biases under power-asymmetric conflicts remain unexplored. We therefore investigate the types of power-related biases human moderators exhibit in power-asymmetric conflict moderation (RQ1) and explore how AI suggestions influence these biases (RQ2). To this end, we conducted a mixed-design experiment with 50 participants, using real conflicts between consumers and merchants as the scenario. Results suggest several biases toward supporting the powerful party in both moderation modes. AI assistance alleviates most biases of human moderation but also amplifies a few. Based on these results, we propose several insights for future research on human moderation and human-AI collaborative moderation systems for power-asymmetric conflicts.
Despite experiencing extensive losses in telecommunication infrastructure since October 2023, Gazans have managed to communicate with the outside world. How have they accomplished this? Through semi-structured interviews with 18 Gazan residents, this study examines how Gazans have perceived various interruptions and losses of electronic communication, how they responded and worked around communication limits, and why they persisted in communicating outside of Gaza. Our findings confirm previous results about communication under state-imposed telecom shutdowns and also contribute new knowledge, given Gaza's distinctive political and technological dynamic. We find that restrictions drove participants---who felt compelled to maintain contact---to perpetual technical improvisation, often toward pre-digital tools, varying by geography, available technology, and electrical power. Creative, subaltern networks such as Bluetooth meshes and street internet disrupted severe repression. Our participants discussed such activities as a response to the larger context of violence, and we conceptualize their efforts as "digital infrastructural resistance."
Compensation is often the primary ethical interface between HCI researchers and low-income communities. Yet prevailing models of compensation can perpetuate neocolonial extraction and frame participation as transactional labor. This practice risks creating dependency and obscuring power imbalances, ultimately compromising both research integrity and participant dignity. Drawing on the experiences of researchers working in Africa and the southern African philosophy of Ubuntu, this paper employs a decolonial lens to critique the research economy of participation and compensation. We propose a framework for relational compensation, which re-imagines compensation not as payment for data but as a form of restorative justice and relational accountability. Through analytic vignettes, we examine tensions around community-researcher interdependency, gendered care burdens, and community solidarity. We conclude with principles for relational research economies that prioritize communal benefit, long-term data sovereignty, and co-designed terms of engagement, offering HCI a path toward reciprocal praxis.
The increasing digitalization of society has intensified the importance of secure and effective data management. For human rights defender organizations, these demands are complicated by scarce resources and risks of surveillance and online harassment. Simultaneously, regulatory frameworks such as the GDPR shape how these organizations are required to handle data. This paper examines how human rights defender organizations in Sweden navigate data practices, focusing on their strategies, challenges, and the effects of legal requirements. Drawing on critical data literacy and data feminist perspectives, we conceptualize data literacy as the ability to interpret and act on data in relation to its social and political effects. We show that limited resources significantly constrain organizations’ ability to adopt robust data practices. Nonetheless, data remains crucial for their advocacy and support of marginalized communities. We contribute to HCI by showing how human rights defender organizations develop situated, feminist forms of critical data literacy that challenge dominant assumptions about security, compliance, and good data practice.