Social media can offer numerous benefits, ranging from access to social, instrumental, financial, and other support, to professional development and civic participation. However, these benefits may not be generalizable to all users. Therefore, we conducted an ethnographic case study with eight Autistic young adults, ten staff members, and four parents to understand how Autistic users of social media engage with others, as well as any unintended consequences of use. We leveraged an affordances perspective to understand how Autistic young adults share and consume user-generated content, make connections, and engage in networked interactions with others via social media. We found that they often interpreted digital affordances literally, which sometimes led to negative consequences including physical harm, financial loss, social anxiety, feelings of exclusion, and inadvertent damage to their social relationships. We make recommendations for redesigning social media affordances to be more inclusive of neurodiverse users.
Vaccine hesitancy has always been a public health concern, and anti-vaccine campaigns that proliferate disinformation have gained traction across the US in the last 25 years. The demographics of resistance are varied, with health, religious, and, increasingly, political concerns cited as reasons. With the COVID-19 pandemic igniting the fastest development of vaccines to date, mis- and disinformation about them have become inflammatory, with campaigning allegedly including racial targeting. Through a primarily qualitative investigation, this study inductively examines a large online vaccine discussion space that invokes references to the unethical Tuskegee Syphilis Study to understand how tactics of racial targeting of Black Americans might appear publicly. We find that such targeting is entangled with a genuine discussion about medical racism and vaccine hesitancy. Across 12 distinct voices that address race, medical racism, and vaccines, we discuss how mis- and disinformation sit alongside accurate information in a “polyvocal” space.
Online harm is a prevalent issue in adolescents' online lives. Restorative justice teaches us to focus on those who have been harmed, ask what their needs are, and engage the offending party and community members to collectively address the harm. In this research, we conducted interviews and design activities with harmed adolescents to understand their needs in addressing online harm. They also identified the key stakeholders relevant to their needs, the desired outcomes, and the preferred timing for achieving them. We identified five central needs of harmed adolescents: sensemaking, emotional support and validation, safety, retribution, and transformation. We find that addressing the needs of those who are harmed online usually requires concerted efforts from multiple stakeholders, both online and offline. We conclude by discussing how platforms can implement design interventions to meet some of these needs.
Social Network Services (SNSs) evoke diverse affective experiences. While most are positive, many authors have documented both the negative emotions that can result from browsing SNSs and their impact: "Facebook depression" is a common term for the more severe outcomes. However, while the importance of the emotions experienced on SNSs is clear, methods to catalog them, and systems to detect them, are less well developed. Accordingly, this paper reports on two studies using a novel contextually triggered Experience Sampling Method to log surveys immediately after Instagram use, a popular image-based SNS, thus minimizing recall biases. The first study improves our understanding of the emotions experienced while using SNSs, suggesting that common negative experiences relate to appearance comparison and envy. The second study captures smartphone sensor data during Instagram sessions to detect these two emotions, ultimately achieving peak accuracies of 95.78% (binary appearance comparison) and 93.95% (binary envy).
We collected Instagram Direct Messages (DMs) from 100 adolescents and young adults (ages 13-21) who then flagged their own conversations as safe or unsafe. We performed a mixed-method analysis of the media files shared privately in these conversations to gain human-centered insights into the risky interactions experienced by youth. Unsafe conversations ranged from unwanted sexual solicitations to mental health-related concerns, and images shared in unsafe conversations tended to be of people and convey negative emotions, while those shared in regular conversations more often conveyed positive emotions and contained objects. Further, unsafe conversations were significantly shorter, suggesting that youth disengaged when they felt unsafe. Our work uncovers salient characteristics of safe and unsafe media shared in private conversations and provides the foundation to develop automated systems for online risk detection and mitigation.