The Hidden Load: Parenting Young Children While Leading in Critical Professions
Description

Parenting while serving as a frontline leader is uniquely stressful, yet little is known about how family responsibilities shape physiological stress in these roles. We followed emergency physicians and tactical police leaders, comparing parents of young children with non-parents across four days: one critical mission day, two standard workdays, and one non-workday. Using wearable sensing, expert activity labeling, and daily debriefs, we inferred stress only in sedentary epochs via a normalized-heart-rate method, with an HRV-based index as benchmark. Parents showed higher stress on workdays and non-workdays, but not on critical mission days, where attentional narrowing and strict device policies appear to suppress parenting-related differences. We contribute: (i) in-the-wild physiological evidence that parenthood amplifies stress mainly under permeable boundaries, (ii) a pragmatic stress-labeling pipeline for safety-critical settings, (iii) a configuration-based account linking boundaries, attention, and parenting, and (iv) design implications for stress-aware boundary management systems, supported by an open analysis repository.
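The abstract's normalized-heart-rate method can be illustrated with a minimal sketch: heart rate is normalized against a per-person sedentary baseline, and stress is inferred only in sedentary epochs. The function name, data layout, and threshold below are assumptions for illustration, not the authors' actual pipeline.

```python
from statistics import mean, stdev

def label_stress(epochs, threshold=1.0):
    """epochs: list of (hr_bpm, is_sedentary) tuples for one person.
    Returns a per-epoch label: True/False for sedentary epochs, None otherwise."""
    sedentary_hr = [hr for hr, sed in epochs if sed]
    base_mu, base_sd = mean(sedentary_hr), stdev(sedentary_hr)
    labels = []
    for hr, sed in epochs:
        if not sed:
            labels.append(None)           # non-sedentary: no stress inference
        else:
            z = (hr - base_mu) / base_sd  # person-normalized heart rate
            labels.append(z > threshold)
    return labels
```

Restricting inference to sedentary epochs avoids confusing exercise-driven heart-rate elevation with stress, which is why movement labeling precedes stress labeling in the paper's pipeline.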

How Well do LLMs Assist Parents in Assessing Child Appropriateness of Videos?
Description

Children’s entertainment has become increasingly digital, with much of it available on video-sharing platforms. Although traditional media such as movies and TV are manually curated for appropriateness, the sheer quantity of videos being uploaded online makes this approach impractical. Current automated techniques fail to capture the diversity in parental supervision caused by varying parental preferences, culture, and other factors, while also lacking the transparency and explainability necessary to build parental trust. This study seeks to evaluate LLMs' ability to assess the appropriateness of videos for children under the age of 7 in an explainable manner and their overall alignment with parental values. Our study shows that while LLMs are less effective at determining appropriateness themselves, they can provide beneficial descriptions of the videos and effectively aid in the parental decision-making process.
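The kind of explainable assessment the study evaluates can be sketched as a prompt that asks the model for both a verdict and a rationale. The prompt wording, JSON response format, and `query_llm` callable below are assumptions for illustration, not the study's actual protocol.

```python
import json

PROMPT = (
    "You are helping a parent decide whether a video is appropriate for a "
    "child under 7.\nVideo description: {description}\n"
    'Respond as JSON: {{"appropriate": true/false, "explanation": "..."}}'
)

def assess_video(description, query_llm):
    """query_llm: callable taking a prompt string and returning the model's text."""
    raw = query_llm(PROMPT.format(description=description))
    result = json.loads(raw)
    # Return the explanation alongside the verdict so the parent can make the
    # final call, echoing the finding that LLM descriptions aid decision-making.
    return bool(result["appropriate"]), result["explanation"]
```

Surfacing the explanation, rather than only a binary verdict, is what makes such a tool an aid to parental judgment instead of a replacement for it.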

Understanding Parents’ Desires in Moderating Children’s Interactions with GenAI Chatbots through LLM-Generated Probes
Description

This paper studies how parents want to moderate children’s interactions with Generative AI Chatbots, with the goal of informing the design of future GenAI parental control tools. We first used an LLM to generate synthetic Child–GenAI Chatbot interaction scenarios and worked with four parents to validate their realism. From this dataset, we carefully selected 12 diverse examples that evoked varying levels of concern and were rated the most realistic. Each example included a prompt and GenAI Chatbot response. We presented these to parents (N=24) and asked whether they found them concerning, why, and how they would prefer to modify the responses and be informed. Our findings reveal three key insights: (1) parents express concern about interactions that current GenAI Chatbot parental controls neglect; (2) parents want fine-grained transparency and moderation at the conversation level; and (3) parents need personalized controls that adapt to their desired strategies and children's ages.
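The probe-selection step described above (keep the most realistic scenarios while covering varying concern levels) can be sketched as follows. The data layout, concern-level labels, and selection rule are assumptions for illustration, not the authors' actual procedure.

```python
def select_probes(scenarios, per_level=4):
    """scenarios: list of dicts with 'realism' (float rating), 'concern'
    ('low'/'medium'/'high'), 'prompt', and 'response' keys.
    Returns up to per_level probes per concern level, most realistic first."""
    selected = []
    for level in ("low", "medium", "high"):
        pool = [s for s in scenarios if s["concern"] == level]
        pool.sort(key=lambda s: s["realism"], reverse=True)
        selected.extend(pool[:per_level])  # top-rated within each level
    return selected
```

With `per_level=4` and three concern levels this yields 12 probes, matching the number of examples presented to parents in the study.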

“I Wanted Them to Think That I Wrote That”: AI-Generated Self-Presentation on Dating Apps and Implications of Non-Disclosure on Informed Consent
Description

Generative artificial intelligence (AI) adds unprecedented scale to capabilities for self-presentation online that may diverge from one’s physical-world identity, thus potentially misinforming consent to intimate interactions, such as in online dating. Yet there is little empirical understanding of AI-generated self-presentation and (non-)disclosure to interaction partners. We present a qualitative survey of 113 online daters who used AI-generated content in profiles or messages seen by partners they later met in person. Findings show that generative AI is often used to fabricate attractive dating personalities through profile text and bios, with no relevance to one’s actual identity, and is seldom disclosed to meeting partners to avoid romantic rejection. Because sexual assault is defined by mis- or under-informed consent, the study positions generative AI as a potentially significant sexual assault risk factor through its use for presentation of non-physical traits that are influential to dating outcomes yet not readily identified as AI-generated upon meeting face-to-face. Content warning: this paper discusses forms of sexual violence including rape by deception.

The Algorithmic Mirror: Knowledge Creation and Self-Perception in Dating Applications
Description

Algorithmic dating applications mediate romance through an "algorithmic mirror," subjecting users to data-driven classifications that shape their self-perception. However, the specific strategies users employ to interpret and strategically manage this reflection remain underexplored. Understanding this dynamic is critical, as navigating the algorithmic gaze demands significant emotional labor and has profound implications for user agency and well-being. Through semi-structured interviews with 15 OkCupid users, I investigated this process of sense-making. I contribute a novel typology of three knowledge forms (Folk, Personal, and Academic) that users construct to redefine themselves against the algorithm. Theoretically, this paper frames the "algorithmic other" as a statistical counterpart to Mead's "generalized other," revealing a core "dual-audience dilemma" where users perform for both humans and machines. These findings inform the design of more transparent and contestable systems that better support user agency.

"Chat, Should I Leave Him?" Risks, Rewards, and Roles for AI in Relationship Advice
Description

As more people turn to chatbots for socioemotional support (often termed psychosocial AI), the stakes of understanding these interactions grow. Psychosocial AI might foster healthier human-human relationships, but it might also exacerbate loneliness, abuse, and self-harm. We provide an empirical account of one less-studied facet: seeking AI advice on sex, dating, and relationships with other people. We recruited 25 people who use AI for relationship advice to complete a questionnaire, collecting 90 prompts illustrating their practices. Interviews with 17 of them further explored how they navigate AI’s limitations to achieve intimacy goals. Our findings detail (1) the roles that users imagine for AI in relationship advice; (2) how users navigate risks like sycophancy and overreliance to attain relational benefits; and (3) the folk theories users hold and the prompting tactics they employ to overcome AI’s limitations. We close with recommendations for human-AI interaction, AI safety, and sociotechnical research, towards AI that supports healthier digital intimacies.
