45. Platforms and Algorithms (no presentations)

Fairness Evaluation in Text Classification: Machine Learning Practitioner Perspectives of Individual and Group Fairness
Description

Mitigating algorithmic bias is a critical task in the development and deployment of machine learning models. While several toolkits exist to aid machine learning practitioners in addressing fairness issues, little is known about the strategies practitioners employ to evaluate model fairness and what factors influence their assessment, particularly in the context of text classification. Two common approaches to evaluating the fairness of a model are group fairness and individual fairness. We conducted a study with machine learning practitioners (n=24) to understand the strategies they use to evaluate models. The metrics presented to practitioners (group vs. individual fairness) impacted which models they considered fair. Participants focused on the risks of underprediction and overprediction and on model sensitivity to identity token manipulations. We uncovered fairness assessment strategies that draw on personal experience and on how practitioners form groups of identity tokens to test model fairness. We provide recommendations for interactive tools for evaluating fairness in text classification.
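The two evaluation approaches contrasted above lend themselves to a compact illustration. What follows is a minimal Python sketch, not the study's protocol: it assumes a hypothetical scikit-learn-style pipeline called model whose predict_proba method accepts raw texts, plus a toy probe template and toy identity tokens chosen purely for illustration.

from itertools import combinations

IDENTITY_TOKENS = ["women", "men", "muslims", "christians"]  # hypothetical tokens
TEMPLATE = "I am friends with {group}."  # hypothetical probe sentence

def group_fairness_gap(model, texts_by_group, threshold=0.5):
    """Group fairness: compare positive-prediction rates across groups
    (a demographic-parity-style gap)."""
    rates = {}
    for group, texts in texts_by_group.items():
        probs = model.predict_proba(texts)[:, 1]
        rates[group] = sum(p >= threshold for p in probs) / len(probs)
    # Largest pairwise difference in positive-prediction rates.
    return max(abs(rates[a] - rates[b]) for a, b in combinations(rates, 2))

def individual_fairness_sensitivity(model, template, tokens):
    """Individual fairness: perturb only the identity token and measure how
    far the model's scores spread across otherwise-identical inputs."""
    texts = [template.format(group=t) for t in tokens]
    probs = model.predict_proba(texts)[:, 1]
    return max(probs) - min(probs)  # 0.0 means token-invariant predictions

A gap near zero on either check suggests the model treats the probed groups similarly; the paper's finding is that which of these two numbers practitioners are shown shapes which models they judge fair.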

It Takes (at least) Two: The Work to Make Romance Work
Description

Digitalization has motivated romance novelists to move from traditional to self-publishing online. However, engagement with flexible and responsive, yet precarious and biased, algorithmic systems online poses challenges for novelists. Through surveys and interviews with the novelists, and using the lens of feminist political economy, we investigate how digitalization has impacted the novelists' work practices. Our findings detail the increased agency afforded by self-publishing online, which comes at the expense of performing, individually, collectively, and with assistance, new forms of work otherwise performed by publishing houses. We focus on the immaterial, invisible, and unpaid work that the novelists and the ecology of workers surrounding them conduct. We make recommendations for designing digital labor platforms that support the work practices of self-employed digital workers toward a more sustainable, collective, and inclusive future(s) of work.

Understanding Human Intervention in the Platform Economy: A case study of an indie food delivery service
Description

This paper examines the sociotechnical infrastructure of an “indie” food delivery platform. The platform, Nosh, provides an alternative to mainstream services such as DoorDash and Uber Eats in several communities in the Western United States. We interviewed 28 stakeholders, including restaurateurs, couriers, consumers, and platform administrators. Drawing on the infrastructure literature, we learned that the platform is a patchwork of disparate technical systems held together by human intervention. Participants join this platform because they receive greater agency, financial security, and local support. We identify the key role human intervention plays in making food delivery platform users feel respected. This study provides insights into the affordances, limitations, and possibilities of food delivery platforms designed to prioritize local contexts over transnational scales.

Attached to "The Algorithm": Making Sense of Algorithmic Precarity on Instagram
Description

This work explores how users navigate the opaque and ever-changing algorithmic processes that dictate visibility on Instagram through the lens of Attachment Theory. We conducted a thematic analysis of 1,100 posts and comments on r/Instagram to understand how users engage in collective sensemaking with regard to Instagram’s algorithms, user-perceived punishments, and strategies to counteract algorithmic precarity. We found that the unpredictability of how Instagram rewards or punishes a user can lead to distress, hypervigilance, and a need to appease “the algorithm”. We therefore frame these findings through Attachment Theory, drawing upon the metaphor of Instagram as an unreliable paternalistic figure that inconsistently rewards users. User experiences are then contextualized through the lens of anxious, avoidant, disorganized, and secure attachment. We conclude with suggestions for fostering secure attachment to the Instagram algorithm, including potential strategies to help users successfully cope with uncertainty.

Creator-friendly Algorithms: Behaviors, Challenges, and Design Opportunities in Algorithmic Platforms
Description

In many creator economy platforms, algorithms significantly impact creators’ practices and decisions about their creative expression and monetization. Emerging research suggests that the opacity of the algorithm and of platform policies often distracts creators from their creative endeavors. To study how algorithmic platforms can be more ‘creator-friendly,’ we conducted a mixed-methods study: interviews (N=14) and a participatory design workshop (N=12) with YouTube creators. Through the interviews, we found how creators’ folk theories of the curation algorithm shape their work strategies (whether they choose to work with or against the algorithm) and the challenges they face in the process. In the workshop, creators explored solution ideas for overcoming these challenges, aimed at fostering diverse and creative expression, achieving success as a creator, and motivating creators to continue their work. Based on these findings, we discuss design opportunities for how algorithmic platforms can support and motivate creators to sustain their creative work.

Participation and Division of Labor in User-Driven Algorithm Audits: How Do Everyday Users Work Together to Surface Algorithmic Harms?
Description

Recent years have witnessed an interesting phenomenon in which users come together to interrogate potentially harmful algorithmic behaviors they encounter in their everyday lives. Researchers have started to develop theoretical and empirical understandings of these user-driven audits, in the hope of harnessing the power of users to detect harmful machine behaviors. However, little is known about users’ participation and division of labor in these audits, which are essential to supporting these collective efforts in the future. Through collecting and analyzing 17,984 tweets from four recent cases of user-driven audits, we shed light on patterns of user participation and engagement, especially those of the top contributors in each case. We also identified the various roles user-generated content played in these audits, including hypothesizing, data collection, amplification, contextualization, and escalation. We discuss implications for designing tools to support user-driven audits and the users who labor to raise awareness of algorithmic bias.
