As artificial intelligence continues to advance, marginalized communities, particularly in Africa, remain limited in their ability to shape what AI should do, how it should be built, and how it might benefit them. This study adopted a speculative co-design approach with participants from Ocean View, a low-income community in South Africa. The aim was to elicit and facilitate collective visions to reimagine the future of AI and explore ways to make AI technologies more culturally relevant. Our findings reveal participants’ perceptions of AI, which informed a collective vision of AI designs that embed the community's local language and culture, as well as services aimed at improving the community’s economic opportunities. Based on these insights, we identified directions for ethical AI design for marginalized communities as trajectories within AI research: recognizing and preserving cultural identity, meeting the need for affordable AI designs, and harnessing the potential of AI for socio-economic advancement.
Millions of users across the globe turn to AI chatbots for their creative needs, inviting widespread interest in understanding how they represent diverse cultures. However, evaluating cultural representations in open-ended tasks remains challenging and underexplored. In this work, we present TALES, an evaluation of cultural misrepresentations in LLM-generated stories for diverse Indian cultural identities. First, we develop TALES-Tax, a taxonomy of cultural misrepresentations by collating insights from participants with lived experiences in India through focus groups (N=9) and individual surveys (N=15). Using TALES-Tax, we evaluate 6 models through a large-scale annotation study spanning 2,925 annotations from 108 annotators with lived experience and native language proficiency from across 71 regions in India and 14 languages. Concerningly, we find that 88% of the generated stories contain misrepresentations, and such errors are more prevalent in mid- and low-resourced languages and stories based in peri-urban regions in India. We also transform the annotations into TALES-QA, a standalone question bank to evaluate the cultural knowledge of models.
Building a sense of home is vital to refugees’ well-being and integration during resettlement. While prior HCI research has highlighted the significance of cultural identity and family bonds in shaping this sense of home, few studies have explored how digital technologies can actively support these dimensions. Informed by ambiguous loss theory, this paper presents a study of Culture Link, a collaborative AI image-generation platform with playful features, designed to foster storytelling and engagement among refugee families in Australia. Seven families participated in a ten-day field trial in which they co-created visual stories by sharing images and narratives. Through thematic analysis of interviews and in-app activities, we identified five ways the platform facilitated a sense of home: memory preservation and re-interpretation, cultural transmission, creativity and playfulness, family engagement, and generational identity divergence. Our findings extend ambiguous loss theory in displacement contexts and contribute to HCI research by demonstrating generative AI as a facilitator for storytelling.
Data-driven policymaking has become central in public administration, leveraging datasets to optimize resource allocation and service delivery. Yet this trend raises critical questions about equity, representation, and the inclusion of marginalized communities in data governance. This paper examines the intersection of bureaucratic frameworks, data systems, and community needs, with a focus on disadvantaged groups. Drawing on a nationally representative survey (N = 754) and computational text analysis, we show that low-income respondents and residents of disadvantaged communities are more skeptical of data reliability and transparency, and place greater emphasis on community voice and ethical safeguards than their more advantaged counterparts. Our contribution lies in integrating intersectionality and place-based justice with HCI theories of data governance. We conclude with design recommendations for civic technologies and participatory data infrastructures that create accessible platforms, embed feedback loops, and support co-governance models fostering transparency, trust, and accountability.
Generative AI is rapidly diffusing worldwide, yet access remains uneven. In informal settlements, barriers of cost, literacy, and connectivity can exclude residents from AI-enabled self-expression. This paper presents Street Scenes: a public appliance for walk-up interaction with generative AI video in Dharavi, Mumbai. Inspired by the “Hole in the Wall” computers and previous Dharavi speech deployments, the system lets passers-by capture phone images, add voice-, button-, and dial-based prompts, and generate short videos to view and leave locally. We report on ideation workshops, two Wizard-of-Oz prototypes, and a 13-day in-situ deployment across Dharavi street locations. Findings show residents appropriating AI for play, self-presentation, small business promotion, and community messaging, while also raising concerns about privacy, trust, and misuse. We contribute: (1) a model for public AI appliances; (2) empirical insights into community engagement with generative AI; and (3) design lessons for accessible, equitable, and community-governed AI systems.
AI technologies are increasingly deployed in high-stakes domains such as education, healthcare, law, and agriculture to address complex challenges in non-Western contexts. This paper examines eight real-world deployments spanning seven countries and 18 languages, combining 17 interviews with AI developers and domain experts alongside secondary research. Our findings identify six cross-cutting factors — Language, Institution, Safety, Task, End-User Demography, and Domain — that structured how systems were designed and deployed. These factors were shaped by Sociocultural (diversity, practices), Institutional (resources, policies), and Technological (capabilities, limits) influences. We find that building effective AI systems required extensive collaboration between AI developers and domain experts, with human resources proving more critical to achieving safe and effective outcomes in high-stakes domains than technological expertise alone. Additionally, we present 12 guidelines synthesizing these dynamics for designing AI-for-social-good systems that are culturally grounded, equitable, and responsive to the needs of non-Western contexts.