Mental health discussions on public forums influence people's perceptions of mental illness. Negative consequences may result from hostile and "othering" portrayals of people with mental disorders. Adopting the lens of Moral Foundations Theory (MFT), we study framings of mental health discourse on Twitter and News, and how moral underpinnings abate or exacerbate stigma. We adopted a large language model based representation framework to score 13,277,115 public tweets and 21,167 news articles against MFT's five foundations. We found discussions on Twitter to demonstrate compassion, justice, and equity-centered moral values for those suffering from mental illness, in contrast to discussions on News. That said, stigmatized discussions appeared on both Twitter and News, with news articles being more stigmatizing than tweets. We discuss implications for public health authorities to refine measures for safe reporting of mental health, and for social media platforms to design affordances that enable empathetic discourse.
Social media platforms are a place where people look for information and social support for mental health, resulting in both positive and negative effects on users. TikTok has gained notoriety for an abundance of mental health content and discourse. We present findings from a semi-structured interview study with 16 participants about mental health content and participants' perceptions of community on TikTok. We find that TikTok's community structure is permeable, allowing for self-discovery and understanding not found in traditional online communities. However, participants are wary of mental health information due to conflicts between a creator's vulnerability and credibility. Our interviews suggest that the "For You Page" is a runaway train that encourages diverse community and content engagement but also displays harmful content that participants feel they cannot escape. We propose design implications to support better mental health, as well as implications for social computing research on community in algorithmic landscapes.
Extensive research has been published on the conversational factors of effective volunteer peer counseling on online mental health platforms (OMHPs). However, studies differ in how they define and measure success outcomes, with most prior work examining only a single success metric. In this work, we model the relationship between previously reported linguistic predictors of effective counseling with four outcomes following a peer-to-peer session on a single OMHP: retention in the community, following up on a previous session with a counselor, users' evaluation of a counselor, and changes in users' mood. Results show that predictors correlate negatively with community retention but positively with users following up with and giving higher evaluations to individual counselors. We suggest actionable insights for therapy platform design and outcome measurement based on findings that the relationship between predictors and outcomes of successful conversations depends on differences in measurement construct and operationalization.
Media coverage has historically played an influential and often stigmatizing role in the public's understanding of mental illness through harmful language and inaccurate portrayals of those with mental health issues. However, it is unknown how and to what extent media events may affect stigma in online discourse regarding mental health. In this study, we examine a highly publicized event -- the celebrity defamation trial between Johnny Depp and Amber Heard -- to uncover how stigmatizing and destigmatizing language on Twitter changed during and after the course of the trial. Using causal impact and language analysis methods, we provide a first look at how external events can lead to significantly greater levels of stigmatization and lower levels of destigmatization on Twitter, toward not only the particular disorders targeted in the coverage of external events but also general mental health discourse.
Online harassment is a global problem. This article examines perceptions of harm and preferences for remedies associated with online harassment among nearly 4000 participants in 14 countries around the world. The countries in this work reflect a range of identities and values, with a focus on those outside of North American and European contexts. Results show that perceptions of harm are higher among participants from all countries studied compared to the United States. Non-consensual sharing of sexual photos is consistently rated as harmful in all countries, while insults and rumors are perceived as more harmful in non-U.S. countries, especially harm to family reputation. Lower trust in other people and a lower sense of safety in one's neighborhood correlate with increased perceptions of harm of online harassment. In terms of remedies, participants in most countries prefer monetary compensation, apologies, and publicly revealing offenders' identities compared to the U.S. Social media platform design and policy must consider regional values and norms, which may depart from U.S.-centric approaches.
Trauma is a common experience affecting over 70 percent of adults globally, with many survivors seeking support from online communities. Yet few studies explore the online experiences of muted groups who lack the words to describe or name their trauma. We draw on 29 in-depth interviews with muted trauma survivors who belong to online communities where trauma narratives are commonplace. Using a spinning top metaphor, we model the sociotechnical nature of the disclosure decision-making process, uncovering new affordances, such as indirect feedback and transportability in online platforms. Findings challenge prior notions of community engagement and algorithmic filter bubbles, highlighting the potential for algorithmic filters to counteract societal filters for muted groups. We conclude with design recommendations to make online spaces safer for trauma survivors.