This study session has ended. Thank you for participating.
The success of Wikipedia and other user-generated content communities has been driven by the openness of recruiting volunteers globally, but this openness has also led to a persistent lack of trust in their content. Despite several attempts at developing trust indicators to help readers more quickly and accurately assess the quality of content, challenges remain for practical deployment to general consumers. In this work we identify and address three key challenges: 1) empirically determining which metrics from prior and existing community approaches most impact reader trust; 2) validating indicator placements and designs that are compact yet noticed by readers; and 3) demonstrating that such indicators can not only lower trust but also increase perceived trust in the system when appropriate. By addressing these challenges, we aim to provide a foundation for future tools that can practically increase trust in user-generated content and the sociotechnical systems that generate and maintain it.
Play is an essential part of the human experience and can be found throughout the lifespan. While play has long been of interest to the HCI community, research has often focused on the technologies supporting game play, the potential outcomes of play (e.g., skill-building, health improvements), or play among children. This paper explores what play looks like in online communities that are not specifically game-based and consist primarily of adults. Drawing on online ethnographic work with ARMY (i.e., Adorable Representative M.C. for Youth), the fandom of the South Korean musical group BTS, we explore how BTS and ARMY collaboratively construct a playful social environment using various social media platforms. This work contributes an expanded conceptualization of how adults create playful places that are not specifically game-based and highlights the role of socio-technical systems in their community building.
In recent years, political crowdfunding campaigns, through which politicians raise money to fund their election campaigns, have emerged. Divisive issues discussed in these campaigns may not only motivate donations but could also have a broader priming effect on people's social opinions. In the U.S., more than one-third of the population holds moderate opinions and shows a tendency to swing their opinions based on recent and more accessible events. In this paper, we ask: can such campaigns further prime people's responses to partisan topics, even when those topics are discussed in a non-political context? To answer this question, we analyzed the influence of exposure to a political candidate's crowdfunding campaign on responses to a subsequently seen, unrelated scientific topic that is not inherently political but is seen as partisan in the U.S. (climate change). We found that exposure to an attitude-inconsistent political candidate's crowdfunding campaign (a campaign that runs counter to someone's existing political beliefs) can have a significant priming effect on subsequently seen politically charged topics. This effect may occur due to the activation of in-group identity by the candidate's partisan campaign. Guided by these findings, we investigated elements that can mitigate this self-categorization effect. We found that carefully designed content following framing techniques such as schema framing and threat/safety framing can mitigate people's sense of self-categorization toward non-political topics.
Why do some peer production projects do a better job of engaging potential contributors than others? We address this question by comparing three Indian-language Wikipedias: Malayalam, Marathi, and Kannada. We found that although the three projects share goals, technological infrastructure, and a similar set of challenges, Malayalam Wikipedia's community engages language speakers in contributing at a much higher rate than the others. Drawing from a grounded theory analysis of interviews with 18 community participants from the three projects, we found that experience with participatory governance and free/open-source software in the Malayalam community supported high engagement of contributors. Counterintuitively, we found that financial resources intended to increase participation in the Marathi and Kannada communities hindered the growth of those communities. Our findings underscore the importance of social and cultural context in the trajectories of peer production communities.
Online investigations are increasingly conducted by individuals with diverse skill levels and experiences, with mixed results. Novice investigations often result in vigilantism or doxxing, while expert investigations have greater success rates and fewer mishaps. Many of these experts are involved in a community of practice known as Open Source Intelligence (OSINT), with an ethos and set of techniques for conducting investigations using only publicly available data. Through semi-structured interviews with 14 expert OSINT investigators from nine different organizations, we examine the social dynamics of this community, including the collaboration and competition patterns that underlie their investigations. We also describe investigators’ use of and challenges with existing OSINT tools, and implications for the design of social computing systems to better support crowdsourced investigations.