Health Equity and Underserved Populations

Conference Name
CHI 2026
"I Should Know, But I Dare Not Ask": From Understanding Challenges in Healthcare Journeys to Deriving Design Implications for North Korean Defectors' Adaptation
Abstract

While it is known that North Korean defectors (NKDs) struggle with South Korea's healthcare system, the specific challenges of their patient journey remain underexplored. To investigate this, we conducted interviews with 10 NKDs about an 8-step patient journey and identified the clinical consultation step as a critical barrier for all participants, marked by three key challenges: expressing symptoms, managing social and cultural concerns, and overcoming language differences. In response, we developed Medibridge, a mobile prototype that allows users to rehearse with an AI doctor before a real hospital visit to generate a tangible "Helper Note" for their actual consultation. Our evaluation with 15 NKDs showed improvements in perceived communication capability, including greater expression clarity, reduced social and cultural concerns, and enhanced linguistic confidence. Our contributions include an empirical understanding of NKDs' healthcare challenges, a novel AI-powered rehearsal system that prepares users for real-world clinical communication, and design implications for inclusive technologies for displaced populations.

Authors
Hyungwoo Song
Seoul National University, Seoul, Korea, Republic of
Jeongha Kim
Seoul National University, Seoul, Korea, Republic of
Minju Kim
Seoul National University, Seoul, Korea, Republic of
Duhyung Kwak
Seoul National University, Seoul, Korea, Republic of
Minjeong Shin
Seoul National University, Seoul, Korea, Republic of
Bongwon Suh
Seoul National University, Seoul, Korea, Republic of
Hyunggu Jung
Seoul National University, Seoul, Korea, Republic of
Video
Designing with Medical Mistrust: Perspectives from Black Older Adults in Publicly Subsidized Housing
Abstract

Despite increasing interest in culturally-sensitive health technologies, medical mistrust remains largely unexplored within human-centered computing. Considered a social determinant of health, medical mistrust is the belief that healthcare providers or institutions are acting against one's best interest. This is a rational, protective response based on historical context, structural inequities, and discrimination. To center race-based medical mistrust and the lived experiences of Black older adults with low income, we conducted interviews within publicly subsidized housing in the Southern United States. Our reflexive themes describe community perspectives on health care and medical mistrust, including accreditation and embodiment, skepticism of financial motivations, and the intentions behind health AI. We provide a reflective exercise for researchers to consider their positionality in relation to community engagements, and reframe our findings through Black Feminist Thought to propose design principles for health self-management technologies for communities with historically grounded medical mistrust.

Authors
Cynthia M. Baseman
Georgia Institute of Technology, Atlanta, Georgia, United States
Reeda Shimaz Huda
Georgia Institute of Technology, Atlanta, Georgia, United States
Rosa I. Arriaga
Georgia Institute of Technology, Atlanta, Georgia, United States
Designing Beyond Language: Sociotechnical Barriers in AI Health Technologies for Limited English Proficiency
Abstract

Limited English proficiency (LEP) patients in the U.S. face systemic barriers to healthcare beyond language and interpreter access, encompassing procedural and institutional constraints. AI advances may support communication and care through on-demand translation and visit preparation, but also risk exacerbating existing inequalities. We conducted storyboard-driven interviews with 14 patient navigators to explore how AI could shape care experiences for Spanish-speaking LEP individuals. We identified tensions around linguistic and cultural misunderstandings, privacy concerns, and opportunities and risks for AI to augment care workflows. Participants highlighted structural factors that can undermine trust in AI systems, including sensitive information disclosure, unstable technology access, and low literacy. While AI tools can potentially alleviate social barriers and institutional constraints, there are risks of misinformation and reducing human-to-human interactions. Our findings contribute AI design considerations that support LEP patients and care teams via rapport-building, educational and language support, and minimizing disruptions to existing practices.

Award
Honorable Mention
Authors
Michelle Huang
University of Illinois Urbana-Champaign, Urbana, Illinois, United States
Violeta J. Rodríguez
University of Illinois Urbana-Champaign, Champaign, Illinois, United States
Koustuv Saha
University of Illinois Urbana-Champaign, Urbana, Illinois, United States
Tal August
University of Illinois Urbana-Champaign, Urbana, Illinois, United States
Promise or Peril? Exploring Black Adults' Perspectives on the Use of Artificial Intelligence in Health Contexts
Abstract

As artificial intelligence (AI) is rapidly integrated into healthcare, ensuring that this innovation helps to combat health inequities requires engaging marginalized communities in health AI futuring. However, little research has examined Black populations' perspectives on the use of AI in health contexts, despite the widespread health inequities they experience, inequities that are already perpetuated by AI. Addressing this research gap, through qualitative workshops with 18 Black adults, we characterize participants' cautious optimism for health AI addressing structural well-being barriers (e.g., by providing second opinions that introduce fairness into an unjust healthcare system), and their concerns that AI will worsen health inequities (e.g., through health AI biases they deemed inevitable and the problematic reality of having to trust healthcare providers to use AI equitably). We advance health AI research by articulating previously unreported health AI perspectives from a population experiencing significant health inequities, and presenting key considerations for future work.

Award
Best Paper
Authors
Andrea G. Parker
Google Research, Atlanta, Georgia, United States
Laura M. Vardoulakis
Google Research, Mountain View, California, United States
Christina Harrington
Google Research, Atlanta, Georgia, United States
Video
Care-in-Retrograde: Designing for Reproductive Health in the Aftermath of Roe.
Abstract

The overturn of Roe v. Wade radically changed abortion access within the United States, leaving women to navigate new financial, legal, and logistical challenges in managing their reproductive health needs. Reporting on findings from co-design workshops with participants from Indiana (a state with an abortion ban) and New York (where abortion is accessible), we investigate how women envision care in response to ongoing legal and medical uncertainty. Drawing together techno-feminist scholarship on care and reproductive health, in this paper we highlight several "entangled" design stories of anxiety and fear in navigating diminished healthcare services, as well as resistance and hope. Our findings prompt critical reflections for HCI on the role of health technology amid a world in which reproductive health, and medicine at large, is often a site of political contestation and conflict. Care-in-Retrograde re-orients a techno-utopian and future-oriented view of health technology to consider design work amid healthcare trajectories of disruption and reversal.

Authors
Cristina Bosco
Indiana University Bloomington, Bloomington, Indiana, United States
Ege Otenen
Indiana University, Bloomington, Indiana, United States
Patrick C. Shih
Indiana University Bloomington, Bloomington, Indiana, United States
Elizabeth Kaziunas
Indiana University Bloomington, Bloomington, Indiana, United States
High Accuracy and Hidden Disparities: Investigating Foundation Model Performance in Clinical Cognitive Assessment
Abstract

Foundation models tested for clinical practice using human-designed metrics may mask fundamental differences in information processing. We investigated this using the clock drawing test (CDT), a cognitive screening tool. Three foundation models achieved 94% accuracy on conventional metrics, matching experts. However, upon decomposing the CDT into 24 questions across five cognitive domains, results diverged significantly. In cases with unanimous model agreement, the models still disagreed with human raters in 22% of cases. Performance varied drastically, with 88% alignment with humans on rule-based executive questions but only 46% on context-dependent anticipatory thinking questions. We observed that models abstained three times more than humans, primarily owing to poor data quality. These findings show that standard clinical evaluation metrics fail to capture how foundation models process information: high aggregate accuracy obscures component-level failures. We contribute a systematic evaluation of frontier models' healthcare capabilities, demonstrate theory-driven task decomposition, and discuss design implications for better human-AI collaborative systems.

Authors
Abhay Sheel Anand
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Deepak Ganesan
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Ravi Karkar
University of Massachusetts Amherst, Amherst, Massachusetts, United States
From the Field to the Algorithm: Understanding Indian Ethnographers' Perspectives on Responsible AI
Abstract

Little research examines how ethnographers perceive Responsible AI. This paper investigates Indian ethnographers' knowledge, critiques, and envisioned roles through a qualitative study with 20 participants. Findings reveal knowledge heterogeneity, with most having indirect engagement through seminars while few demonstrated direct expertise through formal training. Drawing on field experiences, participants critique dominant Responsible AI frameworks as contextually misaligned with India's social realities, failing to address caste, class, and regional hierarchies. Through concrete examples, they demonstrate how helpfulness and harmlessness logics operate without power analysis or cultural grounding, such as welfare metrics missing household dynamics and benchmarks excluding marginalized languages. Participants advocate situated approaches co-created with affected communities, proposing methodological innovations including ethnographic metadata in model cards, field-conditioned evaluation, and interpretive roles in reinforcement learning for human feedback workflows.

Authors
Anasmita Ghoshal
Jadavpur University, Kolkata, India
Atmadeep Ghoshal
King's College London, London, United Kingdom