“It Hasn’t Lived in Our Society”: Investigating Cultural Sensitivity in LLM Chatbots for Emotional Support

Abstract

Large Language Models (LLMs) offer potential benefits for increasing access to digital well-being support, yet their application raises important questions about risks and responsible implementation. This paper examines a critical, often overlooked, dimension of LLM safety: cultural and social alignment in underrepresented contexts. We investigate how LLM-mediated emotional support can be adapted for a specific cultural setting, using Saudi Arabia as a case study. We present CSESC, a Culturally Sensitive Emotional Support Chatbot, developed as a technology probe to explore user perceptions of culturally sensitive responses. Our adaptation process was grounded in emotional support frameworks and guided by multicultural guidelines and local expertise. User evaluations demonstrate that cultural alignment enhances users’ sense of relatedness, while also surfacing tensions between empathy and sociocultural norms. We discuss the notion of “minimum cultural alignment,” contributing to HCI literature on culturally responsive LLM design and broadening the understanding of LLM safety.

Authors
Sarah Aldaweesh
University of Oxford, Oxford, United Kingdom
Ghzal Alelsheikh
Imam Mohammad Ibn Saud University, Riyadh, Saudi Arabia
Falwah Alhamed
Imperial College London, London, United Kingdom
Max Van Kleek
University of Oxford, Oxford, United Kingdom
Nigel R. Shadbolt
University of Oxford, Oxford, United Kingdom

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Care, Disability, & Healthcare Technologies

P1 - Room 129
7 presentations
2026-04-14, 20:15–21:45