Platforms and Algorithms

Conference Name
CHI 2023
Fairness Evaluation in Text Classification: Machine Learning Practitioner Perspectives of Individual and Group Fairness
Abstract

Mitigating algorithmic bias is a critical task in the development and deployment of machine learning models. While several toolkits exist to aid machine learning practitioners in addressing fairness issues, little is known about the strategies practitioners employ to evaluate model fairness and what factors influence their assessment, particularly in the context of text classification. Two common approaches to evaluating the fairness of a model are group fairness and individual fairness. We run a study with machine learning practitioners (n=24) to understand the strategies they use to evaluate models. The metrics presented to practitioners (group vs. individual fairness) impact which models they consider fair. Participants focused on risks associated with underpredicting/overpredicting and on model sensitivity to identity token manipulations. We uncover fairness assessment strategies involving personal experiences and ways users form groups of identity tokens to test model fairness. We provide recommendations for interactive tools for evaluating fairness in text classification.

Authors
Zahra Ashktorab
IBM Research, Yorktown Heights, New York, United States
Benjamin Hoover
IBM Research AI, Cambridge, Massachusetts, United States
Mayank Agarwal
IBM Research, Cambridge, Massachusetts, United States
Casey Dugan
IBM Research, Cambridge, Massachusetts, United States
Werner Geyer
IBM Research, Cambridge, Massachusetts, United States
Hao Bang Yang
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Mikhail Yurochkin
IBM Research, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3544548.3581227

Video
It Takes (at least) Two: The Work to Make Romance Work
Abstract

Digitalization has motivated romance novelists to move from traditional to self-publishing online. However, engagement with flexible and responsive, yet precarious and biased, algorithmic systems online poses challenges for novelists. Through surveying and interviewing the novelists, and using the lens of feminist political economy, we investigate how digitalization has impacted the novelists' work practices. Our findings detail the increased agency afforded by self-publishing online, which comes at the expense of performing new forms of work individually, collectively, and with assistance, otherwise performed by publishing houses. We focus on the immaterial, invisible, and unpaid work that the novelists and the ecology of workers surrounding them conducted. We make recommendations for designing digital labor platforms that support the work practices of self-employed digital workers toward a more sustainable, collective, and inclusive future(s) of work.

Authors
Vishal Sharma
Georgia Institute of Technology, Atlanta, Georgia, United States
Kirsten Bray
Georgia Tech, Atlanta, Georgia, United States
Neha Kumar
Georgia Tech, Atlanta, Georgia, United States
Rebecca E. Grinter
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3544548.3580709

Video
Understanding Human Intervention in the Platform Economy: A case study of an indie food delivery service
Abstract

This paper examines the sociotechnical infrastructure of an “indie” food delivery platform. The platform, Nosh, provides an alternative to mainstream services, such as Doordash and Uber Eats, in several communities in the Western United States. We interviewed 28 stakeholders including restaurateurs, couriers, consumers, and platform administrators. Drawing on infrastructure literature, we learned that the platform is a patchwork of disparate technical systems held together by human intervention. Participants join this platform because they receive greater agency, financial security, and local support. We identify human intervention's key role in making food delivery platform users feel respected. This study provides insights into the affordances, limitations, and possibilities of food delivery platforms designed to prioritize local contexts over transnational scales.

Authors
Samantha Dalal
University of Colorado Boulder, Boulder, Colorado, United States
Ngan Chiem
Princeton University, Princeton, New Jersey, United States
Nikoo Karbassi
Princeton University, Princeton, New Jersey, United States
Yuhan Liu
Princeton University, Princeton, New Jersey, United States
Andrés Monroy-Hernández
Princeton University, Princeton, New Jersey, United States
Paper URL

https://doi.org/10.1145/3544548.3581517

Video
Attached to "The Algorithm": Making Sense of Algorithmic Precarity on Instagram
Abstract

This work explores how users navigate the opaque and ever-changing algorithmic processes that dictate visibility on Instagram through the lens of Attachment Theory. We conducted thematic analysis on 1,100 posts and comments on r/Instagram to understand how users engage in collective sensemaking with regards to Instagram’s algorithms, user-perceived punishments, and strategies to counteract algorithmic precarity. We found that the unpredictability in how Instagram rewards or punishes a user can lead to distress, hypervigilance, and a need to appease “the algorithm”. We therefore frame these findings through Attachment Theory, drawing upon the metaphor of Instagram as an unreliable paternalistic figure that inconsistently rewards users. User experiences are then contextualized through the lens of anxious, avoidant, disorganized, and secure attachment. We conclude by making suggestions for fostering secure attachment towards the Instagram algorithm, by suggesting potential strategies to help users successfully cope with uncertainty.

Authors
Yim Register
University of Washington, Seattle, Washington, United States
Lucy Qin
Brown University, Providence, Rhode Island, United States
Amanda Baughan
University of Washington, Seattle, Washington, United States
Emma S. Spiro
University of Washington, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3544548.3581257

Video
Creator-friendly Algorithms: Behaviors, Challenges, and Design Opportunities in Algorithmic Platforms
Abstract

In many creator economy platforms, algorithms significantly impact creators’ practices and decisions about their creative expression and monetization. Emerging research suggests that the opacity of the algorithm and platform policies often distracts creators from their creative endeavors. To study how algorithmic platforms can be more ‘creator-friendly,’ we conducted a mixed-methods study: interviews (N=14) and a participatory design workshop (N=12) with YouTube creators. Through the interviews, we found how creators’ folk theories of the curation algorithm impact their work strategies, whether they choose to work with or against the algorithm, and the associated challenges in the process. In the workshop, creators explored solution ideas to overcome the aforementioned challenges, such as fostering diverse and creative expression, achieving success as a creator, and motivating creators to continue their work. Based on these findings, we discuss design opportunities for how algorithmic platforms can support and motivate creators to sustain their creative work.

Authors
Yoonseo Choi
KAIST, Daejeon, Korea, Republic of
Eun Jeong Kang
Cornell University, Ithaca, New York, United States
Min Kyung Lee
University of Texas at Austin, Austin, Texas, United States
Juho Kim
KAIST, Daejeon, Korea, Republic of
Paper URL

https://doi.org/10.1145/3544548.3581386

Video
Participation and Division of Labor in User-Driven Algorithm Audits: How Do Everyday Users Work Together to Surface Algorithmic Harms?
Abstract

Recent years have witnessed an interesting phenomenon in which users come together to interrogate potentially harmful algorithmic behaviors they encounter in their everyday lives. Researchers have started to develop theoretical and empirical understandings of these user-driven audits, with a hope to harness the power of users in detecting harmful machine behaviors. However, little is known about users’ participation and their division of labor in these audits, which are essential to support these collective efforts in the future. Through collecting and analyzing 17,984 tweets from four recent cases of user-driven audits, we shed light on patterns of users’ participation and engagement, especially with the top contributors in each case. We also identified the various roles users’ generated content played in these audits, including hypothesizing, data collection, amplification, contextualization, and escalation. We discuss implications for designing tools to support user-driven audits and users who labor to raise awareness of algorithm bias.

Authors
Rena Li
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Sara Kingsley
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chelsea Fan
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Proteeti Sinha
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Nora Wai
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Jaimie Lee
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Hong Shen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Motahhare Eslami
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Jason I. Hong
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3544548.3582074

Video