Large Language Models (LLMs) offer potential benefits for increasing access to digital well-being support, yet their application raises important questions about risks and responsible implementation. This paper examines a critical yet often overlooked dimension of LLM safety: cultural and social alignment in underrepresented contexts. We investigate how LLM-mediated emotional support can be adapted to a specific cultural setting, using Saudi Arabia as a case study. We present CSESC, a Culturally Sensitive Emotional Support Chatbot, developed as a technology probe to explore user perceptions of culturally sensitive responses. Our adaptation process was grounded in emotional support frameworks and guided by multicultural guidelines and local expertise. User evaluations demonstrate that cultural alignment enhances users’ sense of relatedness, while also surfacing tensions between empathy and sociocultural norms. We discuss the notion of “minimum cultural alignment,” contributing to HCI literature on culturally responsive LLM design and broadening the understanding of LLM safety.