Little research examines how ethnographers perceive Responsible AI. This paper investigates Indian ethnographers' knowledge, critiques, and envisioned roles through a qualitative study with 20 participants. Findings reveal heterogeneous knowledge: most participants had engaged with Responsible AI only indirectly, through seminars, while a few demonstrated direct expertise gained through formal training. Drawing on field experiences, participants critique dominant Responsible AI frameworks as contextually misaligned with India's social realities, failing to address caste, class, and regional hierarchies. Through concrete examples, they demonstrate how helpfulness and harmlessness logics operate without power analysis or cultural grounding, such as welfare metrics that miss household dynamics and benchmarks that exclude marginalized languages. Participants advocate situated approaches co-created with affected communities, proposing methodological innovations including ethnographic metadata in model cards, field-conditioned evaluation, and interpretive roles in reinforcement learning from human feedback (RLHF) workflows.
ACM CHI Conference on Human Factors in Computing Systems