Framing Responsible Design of AI for Mental Well-Being: AI as Primary Care, Nutritional Supplement, or Yoga Instructor?

Abstract

Millions of people now use non-clinical Large Language Model (LLM) tools like ChatGPT for mental well-being support. This paper investigates what it means to design such tools responsibly, and how to operationalize that responsibility in their design and evaluation. By interviewing experts and analyzing related regulations, we found that designing an LLM tool responsibly involves: (1) Articulating the specific benefits it guarantees and for whom. Does it guarantee specific, proven relief, like an over-the-counter drug, or offer minimal guarantees, like a nutritional supplement? (2) Specifying the LLM tool's "active ingredients" for improving well-being and whether it guarantees their effective delivery (like a primary care provider) or not (like a yoga instructor). These specifications outline an LLM tool's pertinent risks, appropriate evaluation metrics, and the respective responsibilities of LLM developers, tool designers, and users. These analogies (LLM tools as supplements, drugs, yoga instructors, and primary care providers) can scaffold further conversations about their responsible design.

Authors
Ned Cooper
Cornell University, Ithaca, New York, United States
Jose A. Guridi
Cornell University, Ithaca, New York, United States
Angel Hsing-Chi Hwang
University of Southern California, Los Angeles, California, United States
Beth Kolko
University of Washington, Seattle, Washington, United States
Emma Elizabeth McGinty
Weill Cornell Medical College, New York, New York, United States
Qian Yang
Cornell University, Ithaca, New York, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Stress Management and Emotional Regulation

P1 - Room 124
7 presentations
2026-04-16, 20:15–21:45