AI in Family, Dating and Private Life

Conference Name
CHI 2026
The Hidden Load: Parenting Young Children While Leading in Critical Professions
Abstract

Parenting while serving as a frontline leader is uniquely stressful, yet little is known about how family responsibilities shape physiological stress in these roles. We followed emergency physicians and tactical police leaders, comparing parents of young children with non-parents across four days: one critical mission day, two standard workdays, and one non-workday. Using wearable sensing, expert activity labeling, and daily debriefs, we inferred stress only in sedentary epochs via a normalized-heart-rate method, with an HRV-based index as benchmark. Parents showed higher stress on workdays and non-workdays, but not on critical mission days, where attentional narrowing and strict device policies appear to suppress parenting-related differences. We contribute: (i) in-the-wild physiological evidence that parenthood amplifies stress mainly under permeable boundaries, (ii) a pragmatic stress-labeling pipeline for safety-critical settings, (iii) a configuration-based account linking boundaries, attention, and parenting, and (iv) design implications for stress-aware boundary management systems, supported by an open analysis repository.

Authors
Corinna Rott
University of Maastricht, Maastricht, Limburg, Netherlands
Fettah Kiran
University of Houston, Houston, Texas, United States
Malgorzata W. Kozusznik
Ghent University, Ghent, Belgium
Mien Segers
University of Maastricht, Maastricht, Netherlands
Piet Van den Bossche
University of Antwerp, Antwerp, Belgium
Ergun Akleman
Texas A&M University, College Station, Texas, United States
Ioannis Pavlidis
University of Houston, Houston, Texas, United States
How Well do LLMs Assist Parents in Assessing Child Appropriateness of Videos?
Abstract

Children’s entertainment has become increasingly digital, with much of it available on video-sharing platforms. Although traditional media such as movies and TV are manually curated for appropriateness, the sheer quantity of videos being uploaded online makes this approach impractical. Current automated techniques fail to capture the diversity in parental supervision caused by varying parental preferences, culture, and other factors, while also lacking the transparency and explainability necessary to build parental trust. This study evaluates LLMs' ability to assess the appropriateness of videos for children under the age of 7 in an explainable manner, and their overall alignment with parental values. Our study shows that while LLMs are less effective at determining appropriateness themselves, they can provide beneficial descriptions of the videos and effectively aid in the parental decision-making process.

Authors
Sabila Nawshin
Indiana University Bloomington, Bloomington, Indiana, United States
Ashley Phoebe Ishoel
Indiana University, Bloomington, Indiana, United States
Arun Balaji Buduru
IIIT-Delhi, Delhi, Delhi, India
Apu Kapadia
Indiana University, Bloomington, Indiana, United States
Understanding Parents’ Desires in Moderating Children’s Interactions with GenAI Chatbots through LLM-Generated Probes
Abstract

This paper studies how parents want to moderate children’s interactions with Generative AI Chatbots, with the goal of informing the design of future GenAI parental control tools. We first used an LLM to generate synthetic Child--GenAI Chatbot interaction scenarios and worked with four parents to validate their realism. From this dataset, we carefully selected 12 diverse examples that evoked varying levels of concern and were rated the most realistic. Each example included a prompt and GenAI Chatbot response. We presented these to parents (N=24) and asked whether they found them concerning, why, and how they would prefer to modify the responses and be informed. Our findings reveal three key insights: (1) parents express concern about interactions that current GenAI Chatbot parental controls neglect; (2) parents want fine-grained transparency and moderation at the conversation level; and (3) parents need personalized controls that adapt to their desired strategies and children's ages.

Authors
John Driscoll
University of California San Diego, La Jolla, California, United States
Yulin Chen
University of California San Diego, La Jolla, California, United States
Viki Shi
University of California San Diego, La Jolla, California, United States
Izak Vucharatavintara
San Diego State University, San Diego, California, United States
Yaxing Yao
Johns Hopkins University, Baltimore, Maryland, United States
Haojian Jin
University of California San Diego, La Jolla, California, United States
“I Wanted Them to Think That I Wrote That”: AI-Generated Self-Presentation on Dating Apps and Implications of Non-Disclosure on Informed Consent
Abstract

Generative artificial intelligence (AI) adds unprecedented scale to capabilities for self-presentation online that may diverge from one’s physical-world identity, thus potentially misinforming consent to intimate interactions, such as in online dating. Yet there is little empirical understanding of AI-generated self-presentation and (non-)disclosure to interaction partners. We present a qualitative survey of 113 online daters who used AI-generated content in their profiles or messages seen by in-person meeting partners. Findings show that generative AI is often used to fabricate attractive dating personalities through profile text and bios, with no relevance to one’s actual identity, and is seldom disclosed to meeting partners to avoid romantic rejection. Because sexual assault is defined by mis- or under-informed consent, the study positions generative AI as a potentially significant sexual assault risk factor through its use for presentation of non-physical traits that are influential to dating outcomes yet not readily identified as AI-generated upon meeting face-to-face. Content warning: this paper discusses forms of sexual violence including rape by deception.

Authors
Meryem Barkallah
University of Michigan-Flint, Flint, Michigan, United States
Douglas Zytko
University of Michigan-Flint, Flint, Michigan, United States
The Algorithmic Mirror: Knowledge Creation and Self-Perception in Dating Applications
Abstract

Algorithmic dating applications mediate romance through an "algorithmic mirror," subjecting users to data-driven classifications that shape their self-perception. However, the specific strategies users employ to interpret and strategically manage this reflection remain underexplored. Understanding this dynamic is critical, as navigating the algorithmic gaze demands significant emotional labor and has profound implications for user agency and well-being. Through semi-structured interviews with 15 OkCupid users, I investigated this process of sense-making. I contribute a novel typology of three knowledge forms, Folk, Personal, and Academic, that users construct to redefine themselves against the algorithm. Theoretically, this paper frames the "algorithmic other" as a statistical counterpart to Mead's "generalized other," revealing a core "dual-audience dilemma" where users perform for both humans and machines. These findings inform the design of more transparent and contestable systems that better support user agency.

Authors
Nadav Viduchinsky
Bar-Ilan University, Ramat-Gan, Israel
"Chat, Should I Leave Him?" Risks, Rewards, and Roles for AI in Relationship Advice
Abstract

As more people turn to chatbots for socioemotional support—often termed psychosocial AI—the stakes of understanding these interactions grow. Psychosocial AI might foster healthier human-human relationships—and also might exacerbate loneliness, abuse, and self-harm. We provide an empirical account of one less-studied facet: seeking AI advice on sex, dating, and relationships with other people. We recruited 25 people who use AI for relationship advice to complete a questionnaire, collecting 90 prompts illustrating their practices. Interviews with 17 of them further explored how they navigate AI’s limitations to achieve intimacy goals. Our findings detail (1) the roles that users imagine for AI in relationship advice; (2) how users navigate risks like sycophancy and overreliance to attain relational benefits; and (3) the folk theories users hold and the prompting tactics they employ to overcome AI’s limitations. We close with recommendations for human-AI interaction, AI safety, and sociotechnical research, towards AI that supports healthier digital intimacies.

Authors
Emily Tseng
Microsoft Research, New York, New York, United States
Calvin A. Liang
Northwestern University, Chicago, Illinois, United States