AI governance frameworks are predominantly shaped by Western, secular values, which risks excluding the perspectives of 1.9 billion Muslims whose ethical reasoning is grounded in Islamic principles. To address this gap, we conducted co-design workshops with 12 Muslim women to identify Islamic ethical values for AI systems and to examine how these compare to the \textit{UK AI White Paper}. Our analysis revealed four themes that complement and challenge assumptions in secular AI governance: (1) Honesty, Transparency, and Trustworthiness; (2) Knowledge, Responsibility, and Divine Accountability; (3) Justice, Power, and Equity; and (4) Unity, Inclusivity, and Diversity. Our findings present Islamic ethical AI processes, including Hadith authentication for verification, collective consultation (shūrā), and wealth distribution (zakat) for restorative and redistributive justice. This paper contributes to calls for decolonial AI governance and offers the HCI community an Islamic understanding of AI ethics to broaden debates beyond Western paradigms.
As misinformation proliferates online, large language models (LLMs) have been proposed as a promising tool to accelerate fact-checking workflows. While LLMs demonstrate strong performance in tasks such as text annotation, their capabilities in generating fact-checking reports remain uncertain. To investigate how media experts evaluate LLM-generated fact-checking reports, we conducted a 2 (Source: human vs. LLM) × 2 (Disclosure of Source: yes vs. no) between-subjects online experiment with media professionals (N=274). Our analyses reveal that experts perceive LLM-generated reports as significantly less useful than human-written reports, and this difference grows larger when participants are unaware of the source. However, LLM-generated fact-checking reports were rated as being as accurate and logical as human-authored ones. Party affiliation also plays a role in predicting perceived logicalness. Our findings advance the understanding of experts' evaluation of LLM-generated content in the context of misinformation, offering theoretical contributions to HCI and communication theories as well as practical implications for the field.
Git is widely used for collaborative software development, but it can be challenging for newcomers. While most learning tools focus on individual workflows, Git is inherently collaborative. We present GitAcademy, a browser-based learning platform that embeds a full Git environment with a split-view collaborative mode: learners work on their own local repositories connected to a shared remote repository, while simultaneously seeing their partner's actions mirrored in real time. This design is not intended for everyday software development, but rather as a training simulator to build awareness of distributed states, coordination, and collaborative troubleshooting. In a within-subjects study with 13 pairs of learners, we found that the split-view interface enhanced social presence, supported peer teaching, and was consistently preferred over a single-view baseline, even though performance gains were mixed. We further discuss how split-view awareness can serve as a training-only scaffold for collaborative learning of Git and other distributed technical systems.
AI governance efforts increasingly rely on audit standards: agreed-upon practices for conducting audits. However, poorly designed standards can hide and lend credibility to inadequate systems. We explore how an audit standard's design influences its effectiveness through a case study of ASB 018, a standard for auditing probabilistic genotyping software---software that the U.S. criminal legal system increasingly uses to analyze DNA samples. Through qualitative analysis of ASB 018 and five audit reports, we identify numerous gaps between the standard's desired outcomes and the auditing practices it enables. For instance, ASB 018 envisions that compliant audits establish restrictions on software use based on observed failures; however, audits can comply without establishing such boundaries. We connect these gaps to the design of the standard's requirements, such as vague language and undefined terms. We conclude with recommendations for designing audit standards and evaluating their effectiveness.
Social media platforms and their governance policies often fail marginalized users in high-stakes contexts, including war, violent attacks, human rights violations, humanitarian crises, and situations of systemic oppression. Through interviews, autoethnography, and digital ethnography, this paper presents three case studies from Venezuela, Nigeria, and the United States to examine how marginalized populations engage with social media in non-normative ways. We analyze how platform design and policies intersect with participants' identities, marginalization, and labor. Our central finding is that users' urgent infrastructural and contextual needs are often overlooked, revealing structural flaws in social media design that mirror physical-world power asymmetries. In response, users develop innovative workarounds, engage in self-censorship, and adopt coping strategies, undertaking additional, often invisible, sociotechnical repair work that reinforces their precarity. To address these complex needs, we urge social media companies to collaborate with marginalized users to integrate alternative infrastructural features, such as emergency response tools and exit mechanisms for well-being.
As data becomes integral to civic processes and resource distribution, there is a need for methods by which communities can generate, interpret, and act on data to address their priorities. We introduce ROOTED (Reclaiming and Organizing Our Truths for Equity through Data), a community-centered framework grounded in Black Feminist Thought. By cultivating community data practices, ROOTED helps residents leverage their local insights, lived experiences, and data as tools for advocacy, organizing, and local transformation in pursuit of equitable outcomes. Through two case studies, we demonstrate how researchers and communities can collaboratively implement ROOTED. Our findings suggest that residents use data to build power and relationships that help them collectively achieve their goals. This paper contributes a framework and case study examples that demonstrate how to design community data systems and practices that produce actionable outcomes aligned with residents' visions for their futures.
The restaurant industry has become increasingly reliant on digital technologies for business operations, digital marketing, and promotion, especially during and after the COVID-19 pandemic. This paper presents the findings of a two-year study exploring how women- and minority-owned restaurants in Chicago and Detroit encountered and overcame digital challenges in their day-to-day operations, across a range of digital skill and literacy levels. Drawing from semi-structured and impromptu interviews with restaurant owners (n=47) and participant observation, we apply HCI literature on infrastructuring and patchworking to highlight how restaurateurs' experiences often run counter to the assumptions of a "typical" user. Indeed, they often must build and leverage their networks of support, both offline and online, to overcome failing infrastructures, both within the restaurant industry and on digital platforms. Concurrently, we emphasize the importance of community building and social infrastructuring in overcoming these challenges and in building alternative networks of resources for their communities, especially in light of identity-related inequalities and amid a global moment of crisis.