"Pikachu would electrocute people who are misbehaving": Expert, Guardian and Child Perspectives on Automated Embodied Moderators for Safeguarding Children in Social Virtual Reality
Description

Automated embodied moderation has the potential to create safer spaces for children in social VR, providing a protective figure that takes action to mitigate harmful interactions. However, little is known about how such moderation should be employed in practice.

Through interviews with 16 experts in online child safety and psychology, and workshops with 8 guardians and 13 children, we contribute a comprehensive overview of how Automated Embodied Moderators (AEMs) can safeguard children in social VR.

We explore perceived concerns, benefits and preferences across the stakeholder groups and gather first-of-their-kind recommendations and reflections around AEM design.

The results stress the need to adapt AEMs to children, whether victims or harassers, based on age and development, emphasising empowerment, psychological impact, and humans/guardians-in-the-loop. Our work provokes new participatory design-led directions to consider in the development of AEMs for children in social VR, taking child, guardian, and expert insights into account.

Tricky vs. Transparent: Towards an Ecologically Valid and Safe Approach for Evaluating Online Safety Nudges for Teens
Description

HCI research has been at the forefront of designing interventions for protecting teens online; yet, how can we test and evaluate these solutions without endangering the youth we aim to protect? Towards this goal, we conducted focus groups with 20 teens to inform the design of a social media simulation platform and study for evaluating online safety nudges co-designed with teens. Participants evaluated risk scenarios, personas, platform features, and our research design to provide insight regarding the ecological validity of these artifacts. Teens expected risk scenarios to be subtle and tricky, while also higher in risk to be believable. The teens iterated on the nudges to prioritize risk prevention without reducing autonomy, risk coping, and community accountability. For the simulation, teens recommended using transparency with some deceit to balance realism and respect for participants. Our meta-level research provides a teen-centered action plan to evaluate online safety interventions safely and effectively.

Systemization of Knowledge (SoK): Creating a Research Agenda for Human-Centered Real-Time Risk Detection on Social Media Platforms
Description

Accurate real-time risk identification is vital to protecting social media users from online harm, which has driven research towards advancements in machine learning (ML). While strides have been made regarding the computational facets of algorithms for "real-time" risk detection, such research has not yet evaluated these advancements through a human-centered lens. To this end, we conducted a systematic literature review of 53 peer-reviewed articles on real-time risk detection on social media. Real-time detection was mainly operationalized as "early" detection after-the-fact based on pre-defined chunks of data and evaluated based on standard performance metrics, such as timeliness. We identified several human-centered opportunities for advancing current algorithms, such as integrating human insight in feature selection, improving algorithms by considering human behavior, and utilizing human evaluations. This work serves as a critical call-to-action for the HCI and ML communities to work together to protect social media users before, during, and after exposure to risks.

"I Know I'm Being Observed:" Video Interventions to Educate Users about Targeted Advertising on Facebook
Description

Recent work explores how to educate and encourage users to protect their online privacy. We tested the efficacy of short videos for educating users about targeted advertising on Facebook. We designed a video that utilized an emotional appeal to explain risks associated with targeted advertising (fear appeal), and which demonstrated how to use the associated ad privacy settings (digital literacy). We also designed a version of this video which additionally showed the viewer their personal Facebook ad profile, facilitating personal reflection on how they are currently being profiled (reflective learning). We conducted an experiment (n = 127) in which participants watched a randomly assigned video and measured the impact over the following 10 weeks. We found that these videos significantly increased user engagement with Facebook advertising preferences, especially for those who viewed the reflective learning content. However, those who only watched the fear appeal content were more likely to disengage with Facebook as a whole.

Sharenting on TikTok: Exploring Parental Sharing Behaviors and the Discourse Around Children's Online Privacy
Description

Since the inception of social media, parents have been sharing information about their children online. Unfortunately, this "sharenting" can expose children to several online and offline risks. Although researchers have studied sharenting on multiple platforms, sharenting on short-form video platforms like TikTok, where posts can contain detailed information, spread quickly, and spark considerable engagement, is understudied. Thus, we provide a targeted exploration of sharenting on TikTok. We analyzed 328 TikTok videos that demonstrate sharenting and 438 videos where TikTok creators discuss sharenting norms. Our results indicate that sharenting on TikTok indeed creates several risks for children, not only within individual posts but also in broader patterns of sharenting that arise when parents repeatedly use children to generate viral content. At the same time, creators voiced sharenting concerns and boundaries that reflect what has been observed on other platforms, indicating the presence of cross-platform norms. Promisingly, we observed that TikTok users are engaging in thoughtful conversations around sharenting and beginning to shift norms toward safer sharenting. We offer concrete suggestions for designers and platforms based on our findings.
