Participatory design (PD) is readily applied in HCI to address complex sociotechnical challenges. However, PD faces ethical and practical concerns when it comes to scaling, i.e., extending its depth, scope or span. We extend current understandings of scaling PD by applying feminist care ethics through Joan Tronto's framework of attentiveness, responsibility, competence, responsiveness and solidarity. Pursuing care when scaling participatory activities requires acknowledging the various backstage processes and affective labour that typically remain invisible. Drawing from our experiences of diverse PD endeavours with migrant communities and a Finnish municipality, we use reflexive discussions to recognise 13 labours encountered while scaling our PD approaches with care in mind. We discuss how these introduce new costs and challenges and elaborate on their salience in PD work. Finally, we provide strategies for care-full scaling, which we define as a primarily affective, but also political, process that requires continuous reflexivity.
Large Language Models (LLMs) offer potential benefits for increasing access to digital well-being support, yet their application raises important questions about risks and responsible implementation. This paper examines a critical, often overlooked, dimension of LLM safety: cultural and social alignment in underrepresented contexts. We investigate how LLM-mediated emotional support can be adapted for a specific cultural setting, using Saudi Arabia as a case study. We present CSESC, a Culturally Sensitive Emotional Support Chatbot, developed as a technology probe to explore user perceptions of culturally sensitive responses. Our adaptation process was grounded in emotional support frameworks and guided by multicultural guidelines and local expertise. User evaluations demonstrate that cultural alignment enhances users’ sense of relatedness, while also surfacing tensions between empathy and sociocultural norms. We discuss the notion of “minimum cultural alignment,” contributing to HCI literature on culturally responsive LLM design and broadening the understanding of LLM safety.
This study explores energy and self-tracking technologies beyond the context of managing chronic conditions. Our approach to designing from the experiences of people with disability is informed by crip theory, which challenges societal norms of health and ability. We analysed 50 survey responses and 15 interviews with wearable tracker users and found that self-tracking shapes the interpretation of energy and self-care strategies. Our findings indicate that tracking significantly affects perceptions and judgments of bodily activity, energy and rest. We found a notable disconnect between the metrics provided by the trackers and users' subjective understandings of personal energy, especially during bodily and contextual changes such as travel, illness, or the menstrual cycle. This research contributes to discourses on energy in self-tracking technologies and advocates for designing more inclusive, crip futures for everyone that celebrate irregularity, fluctuation, and change, accommodating diverse bodily rhythms in energy tracking practices.
Robot-assisted feeding enables people with disabilities who require assistance with eating to enjoy a meal independently and with dignity. However, existing systems have only been tested in the lab or at home, leaving in-the-wild social dining contexts (e.g., restaurants) largely unexplored. Designing a robot for such contexts presents unique challenges, such as dynamic and unsupervised dining environments that a robot needs to account for and respond to. Through speculative participatory design with people with disabilities, supported by semi-structured interviews and a custom AI-based visual storyboarding tool, we uncovered ideal scenarios for in-the-wild social dining. Our key insight is that such systems should embody the principles of a white-glove service, where the robot (1) supports multimodal inputs and unobtrusive outputs; (2) exhibits contextually sensitive social behavior and prioritizes the user; (3) takes on expanded roles beyond feeding; and (4) adapts to other relationships at the dining table. Our work has implications for in-the-wild and group contexts of robot-assisted feeding.
Co-design is essential for grounding embodied artificial intelligence (AI) systems in real-world contexts, especially high-stakes domains such as healthcare. While prior work has explored multidisciplinary collaboration, iterative prototyping, and support for non-technical participants, few have interwoven these into a sustained co-design process. Such efforts often target one context and low-fidelity stages, limiting the generalizability of findings and obscuring how participants' ideas evolve. To address these limitations, we conducted a 14-week workshop with a multidisciplinary team of 22 participants, centered around how embodied AI can reduce non-value-added task burdens in three healthcare settings: emergency departments, rehabilitation facilities, and sleep disorder clinics. We found that the iterative progression from abstract brainstorming to high-fidelity prototypes, supported by educational scaffolds, enabled participants to understand real-world trade-offs and generate more deployable solutions. We propose eight guidelines for co-designing more considerate embodied AI: attuned to context, responsive to social dynamics, mindful of expectations, and grounded in deployment.
Mental health applications offer accessible alternatives to traditional care, with many now integrating AI features like chatbots and personalized recommendations. However, little is known about how AI is actually implemented or how users experience these features. Our study examines both developer positioning and user perceptions of AI in mental health applications. We systematically analyzed 244 mental health apps from the Apple App Store, identifying 12 distinct AI roles (e.g., coach, tracker, companion) and four interface types. We then conducted thematic and sentiment analysis of 996 user reviews from 27 AI-enabled apps to understand user experiences. Our analysis revealed recurring tensions around trust, augmentation, and AI replacing human roles. Our findings contribute a structured understanding of AI’s current roles in digital mental health and offer design recommendations for more effective and empathetic implementation.
Chronic illnesses (CI) are increasing worldwide, positioning virtual assistants (VAs) as valuable tools for supporting patients in self-management. As effective self-management relies on holistic, patient-centered practices, AI is increasingly integrated into VAs to provide more personalized support. Yet, it is essential that VA design processes remain grounded in participatory approaches that prioritize patients’ values, needs, and lived experiences. To assess the current state of VA design processes, we conducted a scoping review of 55 papers examining how care is framed and how patients are involved. Our findings reveal that AI-driven VAs prioritize reductionist approaches over holistic care, with minimal patient involvement. This highlights a gap between the potential of patient-centered care technology and current implementation practices. Our contributions include (1) a mapping of care dimensions currently implemented in VAs, (2) a categorization of patient roles in the design process, and (3) design implications to expand care dimensions and patient involvement in AI-driven VAs.