Children and Adults Online Safety

Conference Name
CHI 2024
"Pikachu would electrocute people who are misbehaving": Expert, Guardian and Child Perspectives on Automated Embodied Moderators for Safeguarding Children in Social Virtual Reality
Abstract

Automated embodied moderation has the potential to create safer spaces for children in social VR, providing a protective figure that takes action to mitigate harmful interactions. However, little is known about how such moderation should be employed in practice. Through interviews with 16 experts in online child safety and psychology, and workshops with 8 guardians and 13 children, we contribute a comprehensive overview of how Automated Embodied Moderators (AEMs) can safeguard children in social VR. We explore perceived concerns, benefits and preferences across the stakeholder groups and gather first-of-their-kind recommendations and reflections around AEM design. The results stress the need to adapt AEMs to children, whether victims or harassers, based on age and development, emphasising empowerment, psychological impact and humans/guardians-in-the-loop. Our work provokes new participatory design-led directions to consider in the development of AEMs for children in social VR taking child, guardian, and expert insights into account.

Authors
Cristina Fiani
University of Glasgow, Glasgow, United Kingdom
Robin Bretin
University of Glasgow, Glasgow, United Kingdom
Shaun Alexander Macdonald
University of Glasgow, Glasgow, United Kingdom
Mohamed Khamis
University of Glasgow, Glasgow, United Kingdom
Mark McGill
University of Glasgow, Glasgow, Lanarkshire, United Kingdom
Paper URL

doi.org/10.1145/3613904.3642144

Tricky vs. Transparent: Towards an Ecologically Valid and Safe Approach for Evaluating Online Safety Nudges for Teens
Abstract

HCI research has been at the forefront of designing interventions for protecting teens online; yet, how can we test and evaluate these solutions without endangering the youth we aim to protect? Towards this goal, we conducted focus groups with 20 teens to inform the design of a social media simulation platform and study for evaluating online safety nudges co-designed with teens. Participants evaluated risk scenarios, personas, platform features, and our research design to provide insight regarding the ecological validity of these artifacts. Teens expected risk scenarios to be subtle and tricky, while also higher in risk to be believable. The teens iterated on the nudges to prioritize risk prevention without reducing autonomy, risk coping, and community accountability. For the simulation, teens recommended using transparency with some deceit to balance realism and respect for participants. Our meta-level research provides a teen-centered action plan to evaluate online safety interventions safely and effectively.

Authors
Zainab Agha
Vanderbilt University, Nashville, Tennessee, United States
Jinkyung Katie Park
Vanderbilt University, Nashville, Tennessee, United States
Ruyuan Wan
University of Notre Dame, South Bend, Indiana, United States
Naima Samreen Ali
Vanderbilt University, Nashville, Tennessee, United States
Yiwei Wang
Vanderbilt University, Nashville, Tennessee, United States
Dominic DiFranzo
Lehigh University, Bethlehem, Pennsylvania, United States
Karla Badillo-Urquiola
University of Notre Dame, South Bend, Indiana, United States
Pamela J. Wisniewski
Vanderbilt University, Nashville, Tennessee, United States
Paper URL

doi.org/10.1145/3613904.3642313

Systemization of Knowledge (SoK): Creating a Research Agenda for Human-Centered Real-Time Risk Detection on Social Media Platforms
Abstract

Accurate real-time risk identification is vital to protecting social media users from online harm, which has driven research towards advancements in machine learning (ML). While strides have been made regarding the computational facets of algorithms for "real-time" risk detection, such research has not yet evaluated these advancements through a human-centered lens. To this end, we conducted a systematic literature review of 53 peer-reviewed articles on real-time risk detection on social media. Real-time detection was mainly operationalized as "early" detection after-the-fact based on pre-defined chunks of data and evaluated based on standard performance metrics, such as timeliness. We identified several human-centered opportunities for advancing current algorithms, such as integrating human insight in feature selection, algorithms' improvement considering human behavior, and utilizing human evaluations. This work serves as a critical call-to-action for the HCI and ML communities to work together to protect social media users before, during, and after exposure to risks.

Award
Honorable Mention
Authors
Ashwaq Alsoubai
Vanderbilt University, Nashville, Tennessee, United States
Jinkyung Katie Park
Vanderbilt University, Nashville, Tennessee, United States
Sarvech Qadir
Vanderbilt University, Nashville, Tennessee, United States
Gianluca Stringhini
Boston University, Boston, Massachusetts, United States
Afsaneh Razi
Drexel University, Philadelphia, Pennsylvania, United States
Pamela J. Wisniewski
Vanderbilt University, Nashville, Tennessee, United States
Paper URL

doi.org/10.1145/3613904.3642315

"I Know I'm Being Observed:" Video Interventions to Educate Users about Targeted Advertising on Facebook
Abstract

Recent work explores how to educate and encourage users to protect their online privacy. We tested the efficacy of short videos for educating users about targeted advertising on Facebook. We designed a video that utilized an emotional appeal to explain risks associated with targeted advertising (fear appeal), and which demonstrated how to use the associated ad privacy settings (digital literacy). We also designed a version of this video which additionally showed the viewer their personal Facebook ad profile, facilitating personal reflection on how they are currently being profiled (reflective learning). We conducted an experiment (n = 127) in which participants watched a randomly assigned video and measured the impact over the following 10 weeks. We found that these videos significantly increased user engagement with Facebook advertising preferences, especially for those who viewed the reflective learning content. However, those who only watched the fear appeal content were more likely to disengage with Facebook as a whole.

Award
Honorable Mention
Authors
Garrett Smith
Brigham Young University, Provo, Utah, United States
Sarah Carson
Brigham Young University, Provo, Utah, United States
Rhea G. Vengurlekar
Bentley University, Waltham, Massachusetts, United States
Stephanie Morales
Brigham Young University, Provo, Utah, United States
Yun-Chieh Tsai
Brigham Young University, Provo, Utah, United States
Rachel George
Brigham Young University, Provo, Utah, United States
Josh Bedwell
Brigham Young University, Provo, Utah, United States
Trevor Jones
Brigham Young University, Provo, Utah, United States
Mainack Mondal
Indian Institute of Technology, Kharagpur, Kharagpur, West Bengal, India
Brian Smith
Brigham Young University, Provo, Utah, United States
Norman Makoto Su
University of California, Santa Cruz, Santa Cruz, California, United States
Bart Knijnenburg
Clemson University, Clemson, South Carolina, United States
Xinru Page
Brigham Young University, Provo, Utah, United States
Paper URL

doi.org/10.1145/3613904.3642885

Sharenting on TikTok: Exploring Parental Sharing Behaviors and the Discourse Around Children's Online Privacy
Abstract

Since the inception of social media, parents have been sharing information about their children online. Unfortunately, this "sharenting" can expose children to several online and offline risks. Although researchers have studied sharenting on multiple platforms, sharenting on short-form video platforms like TikTok (where posts can contain detailed information, spread quickly, and spark considerable engagement) is understudied. Thus, we provide a targeted exploration of sharenting on TikTok. We analyzed 328 TikTok videos that demonstrate sharenting and 438 videos where TikTok creators discuss sharenting norms. Our results indicate that sharenting on TikTok indeed creates several risks for children, not only within individual posts but also in broader patterns of sharenting that arise when parents repeatedly use children to generate viral content. At the same time, creators voiced sharenting concerns and boundaries that reflect what has been observed on other platforms, indicating the presence of cross-platform norms. Promisingly, we observed that TikTok users are engaging in thoughtful conversations around sharenting and beginning to shift norms toward safer sharenting. We offer concrete suggestions for designers and platforms based on our findings.

Authors
Sophie Stephenson
University of Wisconsin-Madison, Madison, Wisconsin, United States
Christopher Nathaniel Page
Indiana University Bloomington, Bloomington, Indiana, United States
Miranda Wei
University of Washington, Seattle, Washington, United States
Apu Kapadia
Indiana University, Bloomington, Indiana, United States
Franziska Roesner
University of Washington, Seattle, Washington, United States
Paper URL

doi.org/10.1145/3613904.3642447
