Men experiencing infertility face unique challenges navigating traditional masculinity ideologies that discourage emotional expression and help-seeking. This study examines how Reddit's r/maleinfertility community helps men overcome these barriers through digital support networks. Using topic modeling (115 topics), network analysis (11 micro-communities), and time-lagged regression on 11,095 posts and 79,503 comments from 8,644 users, we found that the community functions as a hybrid space: informal diagnostic hub, therapeutic commons, and governed institution. Medical advice dominated discourse (63.3%), while emotional support (7.4%) and moderation (29.2%) created essential infrastructure. Sustained engagement correlated with actionable guidance and affiliation language, not emotional processing. Network analysis revealed structurally cohesive but topically diverse clusters without echo-chamber characteristics. Cross-posters (20% of users) who bridge r/maleinfertility and the gender-mixed r/infertility community serve as navigators and mentors, transferring knowledge between spaces. These findings inform trauma-informed design for stigmatized health communities, highlighting role-aware systems and navigation support.
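The time-lagged regression mentioned above asks whether a feature of a user's activity in one period predicts engagement in the next. A minimal stdlib-Python sketch of that idea, on entirely hypothetical weekly series (the variable names and data below are illustrative, not the study's actual features or measurements):

```python
# Sketch of a single-predictor time-lagged regression:
# does advice-language use in week t predict posting in week t+1?
from statistics import mean

def lagged_slope(x, y, lag=1):
    """OLS slope of y[t+lag] regressed on x[t]."""
    xs, ys = x[:-lag], y[lag:]
    mx, my = mean(xs), mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# Hypothetical weekly series for one user.
advice_terms = [2, 5, 3, 8, 6, 9]   # advice-language mentions in week t
posts_written = [1, 1, 3, 2, 5, 4]  # posts authored per week

slope = lagged_slope(advice_terms, posts_written)
```

A positive slope would be consistent with the abstract's finding that actionable guidance precedes sustained engagement; a real analysis would add controls and compute standard errors.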
Researchers often attribute social media's appeal to its ability to elicit flow, a state of deep absorption and effortless engagement. Yet prolonged use has also been linked to distraction, fatigue, and lower mood. This paradox remains poorly understood, partly because prior studies rely on habitual or one-shot reports that ask participants to attribute flow directly to social media. To address this gap, we conducted a five-day field study with 40 participants, combining objective smartphone app tracking with daily reconstructions of flow-inducing activities. Across 673 reported flow occurrences, participants rarely associated flow with social media (2%). Instead, heavier social media use predicted fewer daily flow occurrences. We further examine this relationship through the effects of social media use on fatigue, mood, and motivation. Altogether, our findings suggest that flow and social media may not align as closely as assumed, and might even compete, underscoring the need for further research.
Online discourse surrounding geopolitical crises is volatile and complex: users often change their opinions and apply rationales divergently depending on the specific scenario under discussion. This paper explores such stance and rationale divergence in social media discussions, focusing on two major ongoing conflicts, the Russia-Ukraine and Israel-Palestine wars. We identify a set of users who discuss both conflicts and label each user's comments with their stance and associated rationale. Using this unique dataset, we explore how people apply rationales divergently and evolve their opinions over time. Our research contributes a reusable, rationale-level annotation methodology to the CHI community. Our findings can inform the design of moderation tools, recommender systems, and discussion interfaces that surface disagreements, calibrate echo-chamber exposure, and ultimately foster healthier online discourse.
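Once each user's comments carry stance labels for both conflicts, divergence can be measured by comparing those labels per user. A minimal sketch with hypothetical user IDs, conflict keys, and stance categories (none of these names come from the paper):

```python
# Hypothetical stance labels for users who discussed both conflicts.
stances = {
    "user_a": {"ru_ua": "supportive", "il_ps": "supportive"},
    "user_b": {"ru_ua": "supportive", "il_ps": "critical"},
    "user_c": {"ru_ua": "critical", "il_ps": "critical"},
    "user_d": {"ru_ua": "critical", "il_ps": "supportive"},
}

def divergent_users(stance_map):
    """Users whose stance label differs between the two conflicts."""
    return sorted(u for u, s in stance_map.items() if s["ru_ua"] != s["il_ps"])

divergent = divergent_users(stances)
share = len(divergent) / len(stances)
```

A fuller analysis would compare rationales (not just stance labels) and track how each user's labels shift over time.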
Positive feedback via likes and awards is central to online governance, yet it remains unclear which attributes of users' posts elicit rewards and how these vary across authors and communities. To examine this, we combine quasi-experimental causal inference with predictive modeling on 11M posts from 100 subreddits. We identify linguistic patterns and stylistic attributes causally linked to rewards, controlling for author reputation, timing, and community context; for example, overtly complicated language, tentative style, and toxicity reduce rewards. Using our set of curated features, we train models that detect highly upvoted posts with high AUC. Our audit of community guidelines highlights a ``policy-practice gap'': most rules focus on civility and formatting requirements, with little emphasis on the attributes we identify as driving positive feedback. These results inform the design of community guidelines, support interfaces that teach users how to craft desirable contributions, and moderation workflows that emphasize positive reinforcement over purely punitive enforcement.
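The AUC used to evaluate the upvote-detection models has a simple ranking interpretation: the probability that a randomly chosen highly-upvoted post receives a higher model score than a randomly chosen other post. A stdlib sketch with hypothetical scores (not the paper's models or data):

```python
def rank_auc(pos_scores, neg_scores):
    """AUC as the probability a random positive outranks a random negative
    (ties count as 0.5)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores: positives = highly upvoted posts.
auc = rank_auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
```

An uninformative scorer yields 0.5; a perfect ranker yields 1.0, which is why AUC is a natural summary for "can the model detect highly upvoted posts."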
Misinformation interventions are often evaluated under ideal conditions, yet real-world systems are rarely flawless. We report on an online experiment ($N=1,004$) comparing five state-of-the-art interventions (inoculation, accuracy prompts, community notes, fact-checks, and indicators) across TikTok, Telegram, and X, examining efficacy and user perceptions under both flawless and erroneous implementations. Misinformation accompanied by fact-checks and indicators was rated as significantly less accurate, while community notes showed weaker effects. Modality did not significantly influence intervention efficacy and had only minor effects on user acceptance. Community notes, fact-checks, and indicators were rated as more helpful but also more annoying than the less informative accuracy prompts. Notably, the efficacy of the interventions disappeared under erroneous conditions, highlighting the crucial role of intervention quality in fostering trust and acceptance. Our findings provide (1) a cross-platform evaluation of misinformation interventions and (2) empirical evidence that accuracy and reliability are crucial in complex social media environments.
Corporate organizations face increasingly complex tasks that demand effective team management. A key concept is the Shared Mental Model (SMM), which enables team members to maintain performance despite limited communication. Traditional SMM measurements rely on interviews or questionnaires, which are labor-intensive, context-specific, and unsuitable for continuous monitoring; consequently, leaders lack practical tools to track shared cognition in real time. Our empirical analysis shows that only specific categories of communication (e.g., informative exchanges) correlate strongly with SMM, clarifying which forms of communication can influence shared cognition. Building on this insight, we propose an approach that estimates SMM from instant messaging systems such as Slack. It categorizes messages into communicative acts using large language models, constructs category-wise communication graphs, and applies a graph neural network for estimation. The model outperforms baselines, demonstrating the feasibility of continuous, scalable monitoring without intrusive surveys. While validated in corporate contexts, the approach could extend to domains such as education, healthcare, and disaster response.
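The middle step of that pipeline, building one communication graph per communicative-act category, can be sketched in a few lines of stdlib Python. The sender/receiver names and category labels below are hypothetical; in the paper's approach, the category label would come from an LLM classifier rather than being given:

```python
from collections import defaultdict

# Hypothetical labeled messages: (sender, receiver, communicative act).
messages = [
    ("alice", "bob", "informative"),
    ("alice", "bob", "informative"),
    ("bob", "carol", "social"),
    ("carol", "alice", "informative"),
]

def build_category_graphs(msgs):
    """One edge-weighted directed graph per category: weight = message count."""
    graphs = defaultdict(lambda: defaultdict(int))
    for sender, receiver, category in msgs:
        graphs[category][(sender, receiver)] += 1
    return {c: dict(edges) for c, edges in graphs.items()}

graphs = build_category_graphs(messages)
```

Each category-wise graph would then be fed to the graph neural network, letting the model weight informative exchanges differently from, say, social chatter.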
The sudden influx of ``TikTok refugees'' into the Chinese platform RedNote in early 2025 created an unprecedented, large-scale online cross-cultural communication event between West and East. Although prior HCI research has studied user behavior on social media, most work remains confined to monolingual or single-culture contexts, leaving cross-linguistic and cross-cultural dynamics underexplored. To address this gap, we focused on a particularly challenging cross-cultural encoding–decoding task that remains stubbornly beyond the reach of machine translation, namely foreign newcomers asking Chinese users for Chinese names, and examined how people collectively constructed a digital ``Babel Tower'' through various information encoding strategies. We collected over 70,000 comments from RedNote and analyzed them with a human-in-the-loop approach built on large language models. From this analysis, we derive a systematic framework that summarizes cross-cultural information encoding strategies, shows how they are combined and layered to complicate decoding, and relates them to engagement metrics such as the number of likes.
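A common shape for human-in-the-loop LLM annotation at this scale is confidence-based triage: accept labels the model is confident about and route the rest to human annotators. A stdlib sketch of that pattern with a stub classifier standing in for the LLM; the function names, threshold, and labels are all hypothetical, not the paper's actual pipeline:

```python
def triage(comments, classify, threshold=0.8):
    """Accept confident model labels; route low-confidence items to humans."""
    auto, for_review = [], []
    for comment in comments:
        label, confidence = classify(comment)
        if confidence >= threshold:
            auto.append((comment, label))
        else:
            for_review.append((comment, label))
    return auto, for_review

# Deterministic stub standing in for an LLM labeling encoding strategies.
def stub_classify(comment):
    if "pinyin" in comment:
        return ("transliteration", 0.95)
    return ("unknown", 0.30)

auto, for_review = triage(["try pinyin first", "???"], stub_classify)
```

The threshold trades annotation cost against label quality; human-corrected items can also be fed back as few-shot examples for the model.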