Prior research on transparency in content moderation has demonstrated the benefits of offering post-removal explanations to sanctioned users. In this paper, we examine whether the influence of such explanations extends beyond the moderated users themselves to the bystanders who witness them. We conduct a quasi-experimental study on two popular Reddit communities (r/AskReddit and r/science) by collecting their data spanning 13 months—a total of 85.5M posts made by 5.9M users. Our causal-inference analyses show that bystanders significantly increase their posting activity and interactivity levels compared with a matched control set of users. In line with previous applications of Deterrence Theory on digital platforms, our findings highlight that understanding the rationales behind sanctions on other users significantly shapes observers' behaviors. We discuss the theoretical implications and design recommendations of this research, focusing on how investing more effort in post-removal explanations can help build thriving online communities.
Moderation systems of online games often follow a retributive model inspired by real-world criminal justice, expecting that punishments can help users reform their behavior. However, decades of criminological research show that punishments alone do not work and call for a rehabilitative approach, such as community-based rehabilitation (CBR), to help offenders transform their minds and behavioral patterns. Motivated by this call, we explore how moderated users view punishments in a community context and how other community members respond in League of Legends (LoL), one of the largest online games. Specifically, we focus on how peer support is sought and provided on the /r/LeagueOfLegends subreddit, the largest LoL-related online community. Our content analysis of player discussions characterized the communication between moderated users and their peers as informative, constructive, and reflexive. We highlight the importance of involving the community in moderation systems and discuss implications for designing CBR mechanisms that could enhance moderation systems.
Recommender systems are increasingly employed by journalistic outlets to deliver personalised news, transforming news curation into a reciprocal yet insufficiently defined process influenced by editors, recommender systems, and individual user actions. To understand the tensions in this dynamic, as well as users’ preferences and perceptions of their role in personalised news curation, we conducted a study with UK participants aged 16–34. Building on a preliminary survey and interview study, which revealed a strong desire among participants for increased agency in personalisation, we designed an interactive news recommender provotype (provocative design artefact) that probed the role of agency in news curation with participants (n=16). Findings highlighted a behaviour-intention gap: participants desire agency yet are reluctant to intervene actively in personalisation. Our research offers valuable insights into how users perceive their agency in personalised news curation, underscoring the importance of designing systems that support individuals in becoming active agents in news personalisation.
AI tools are increasingly deployed in community contexts. However, datasets used to evaluate AI are typically created by developers and annotators outside a given community, which can yield misleading conclusions about AI performance. How might we empower communities to drive the intentional design and curation of evaluation datasets for AI that impacts them? We investigate this question on Wikipedia, an online community with multiple AI-based content moderation tools deployed. We introduce Wikibench, a system that enables communities to collaboratively curate AI evaluation datasets, while navigating ambiguities and differences in perspective through discussion. A field study on Wikipedia shows that datasets curated using Wikibench can effectively capture community consensus, disagreement, and uncertainty. Furthermore, study participants used Wikibench to shape the overall data curation process, including refining label definitions, determining data inclusion criteria, and authoring data statements. Based on our findings, we propose future directions for systems that support community-driven data curation.
Accessibility is an important quality factor of mobile applications. Numerous studies have shown that, despite the availability of many resources to guide the development of accessible software, most mobile and web applications still contain many accessibility issues. Some researchers have surveyed professionals and organizations to understand why accessibility is neglected during software development, but few studies have investigated how developers and organizations respond to accessibility bug reports. Therefore, this paper analyzes accessibility bug reports posted in the Chromium repository to understand how developers and organizations handle them. More specifically, we want to determine the frequency of accessibility bug reports over time, their time-to-fix compared to traditional bug reports (e.g., functional bugs), and the types of accessibility barriers reported. Results show that the frequency of accessibility reports has increased over the years, and that accessibility bugs take longer to be fixed, as they tend to be given low priority.