Knowledge Workers and Crowdworkers

Conference Name
CHI 2024
"Are we all in the same boat?" Customizable and Evolving Avatars to Improve Worker Engagement and Foster a Sense of Community in Online Crowd Work
Abstract

Human intelligence continues to be essential in building ground-truth data, training sets, and for evaluating a plethora of systems. The democratized and distributed nature of online crowd work — an attractive and accessible feature that has led to the proliferation of the paradigm — has also meant that crowd workers may not always feel connected to their remote peers. Despite the prevalence of collaborative crowdsourcing practices, workers on many microtask crowdsourcing platforms work on tasks individually and are seldom directly exposed to other crowd workers. In this context, improving worker engagement on microtask crowdsourcing platforms is an unsolved challenge. At the same time, fostering a sense of community among workers can improve the sustainability and working conditions in crowd work. This work aims to increase worker engagement in conversational microtask crowdsourcing by leveraging evolving avatars that workers can customize as they progress through monotonous task batches. We also aim to improve group identification in individual tasks by creating a community space where workers can share their avatars and feelings on task completion. To this end, we carried out a preregistered between-subjects controlled study (N = 680) spanning five experimental conditions and two task types. We found that evolving and customizable worker avatars can increase worker retention. The prospect of sharing worker avatars and task-related feelings in a community space did not consistently affect group identification. Our exploratory analysis indicated that workers who identify themselves as crowd workers experienced greater intrinsic motivation, subjective engagement, and perceived workload. Furthermore, we discuss how task differences shape the relative effectiveness of our interventions. Our findings have important theoretical and practical implications for designing conversational crowdsourcing tasks and in shaping new directions for research to improve crowd worker experiences.

Authors
Esra Cemre Su de Groot
Delft University of Technology, Delft, Netherlands
Ujwal Gadiraju
Delft University of Technology, Delft, Netherlands
Paper URL

doi.org/10.1145/3613904.3642429

Video
How Low is Low? Crowdworker Perceptions of Microtask Payments in Work versus Leisure Situations
Abstract

Getting paid for completing microtasks online via crowdsourcing (i.e., microworking) has become a widely accepted way to earn money. Despite disputes over low pay rates, little is known about the extent of “lowness” and about the perceptions of microworkers concerning the value of micro-paid online activity. In an online survey on a microtask crowdsourcing platform, respondents demonstrated the dual attitudes of work and leisure toward microworking. Although actual wage rates were lower than microworkers expected, the perceived value of the money earned from microworking was paramount. The monetary equivalent, a newly developed metric calibrating microworkers’ subjective evaluations of monetary and nonmonetary dimensions, of microworking outstripped that of alternative activities, the majority of which were leisure activities. Instead of struggling with below-expectation pay rates, microworkers tend to appreciate the value of small gains, especially in contrast to potential losses incurred by alternative activities.

Authors
Ling Jiang
York University, Toronto, Ontario, Canada
Christian Wagner
City University of Hong Kong, Kowloon, Hong Kong, Hong Kong
Paper URL

doi.org/10.1145/3613904.3642601

Video
LabelAId: Just-in-time AI Interventions for Improving Human Labeling Quality and Domain Knowledge in Crowdsourcing Systems
Abstract

Crowdsourcing platforms have transformed distributed problem-solving, yet quality control remains a persistent challenge. Traditional quality control measures, such as prescreening workers and refining instructions, often focus solely on optimizing economic output. This paper explores just-in-time AI interventions to enhance both labeling quality and domain-specific knowledge among crowdworkers. We introduce LabelAId, an advanced inference model combining Programmatic Weak Supervision (PWS) with FT-Transformers to infer label correctness based on user behavior and domain knowledge. Our technical evaluation shows that our LabelAId pipeline consistently outperforms state-of-the-art ML baselines, improving mistake inference accuracy by 36.7% with 50 downstream samples. We then implemented LabelAId into Project Sidewalk, an open-source crowdsourcing platform for urban accessibility. A between-subjects study with 34 participants demonstrates that LabelAId significantly enhances label precision without compromising efficiency while also increasing labeler confidence. We discuss LabelAId's success factors, limitations, and its generalizability to other crowdsourced science domains.
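The abstract above describes a pipeline that combines Programmatic Weak Supervision (PWS) with a transformer model to infer label correctness. As a rough illustration of the weak-supervision idea only (not the paper's actual implementation), heuristic "labeling functions" over user behavior each vote on whether a crowdsourced label is likely correct, and the votes are aggregated; the feature names below (zoom level, time spent, labels completed) are hypothetical examples of behavioral signals.

```python
# Minimal sketch of programmatic weak supervision (PWS): several
# heuristic labeling functions each vote on whether a crowdsourced
# label is likely correct (+1), incorrect (-1), or abstain (0).
# Votes from non-abstaining functions are combined by majority vote.
# All feature names here are hypothetical.

ABSTAIN, CORRECT, INCORRECT = 0, 1, -1

def lf_fast_labeling(example):
    # Very quick labels are often sloppy.
    return INCORRECT if example["time_on_label_s"] < 1.0 else ABSTAIN

def lf_zoomed_in(example):
    # Labeling while zoomed in suggests careful inspection.
    return CORRECT if example["zoom_level"] >= 3 else ABSTAIN

def lf_experienced_user(example):
    # Experienced users tend to label accurately.
    return CORRECT if example["labels_completed"] > 100 else ABSTAIN

LABELING_FUNCTIONS = [lf_fast_labeling, lf_zoomed_in, lf_experienced_user]

def infer_correctness(example):
    """Majority vote over non-abstaining labeling functions."""
    score = sum(v for v in (lf(example) for lf in LABELING_FUNCTIONS)
                if v != ABSTAIN)
    if score > 0:
        return "likely correct"
    if score < 0:
        return "likely incorrect"
    return "unknown"

example = {"time_on_label_s": 0.5, "zoom_level": 1, "labels_completed": 10}
print(infer_correctness(example))  # one negative vote -> "likely incorrect"
```

In a full PWS system the noisy votes would train a probabilistic label model (and here, downstream, an FT-Transformer) rather than a simple majority vote; the sketch only shows the labeling-function layer.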

Authors
Chu Li
University of Washington, Seattle, Washington, United States
Zhihan Zhang
University of Washington, Seattle, Washington, United States
Esteban Safranchik
University of Washington, Seattle, Washington, United States
Michael Saugstad
University of Washington, Seattle, Washington, United States
Chaitanyashareef Kulkarni
University of Washington, Seattle, Washington, United States
Xiaoyu Huang
University of California, Berkeley, Berkeley, California, United States
Shwetak Patel
University of Washington, Seattle, Washington, United States
Vikram Iyer
University of Washington, Seattle, Washington, United States
Tim Althoff
University of Washington, Seattle, Washington, United States
Jon E. Froehlich
University of Washington, Seattle, Washington, United States
Paper URL

doi.org/10.1145/3613904.3642089

Video
How Knowledge Workers Think Generative AI Will (Not) Transform Their Industries
Abstract

Generative AI is expected to have transformative effects in multiple knowledge industries. To better understand how knowledge workers expect generative AI may affect their industries in the future, we conducted participatory research workshops for seven different industries, with a total of 54 participants across three US cities. We describe participants' expectations of generative AI's impact, including a dominant narrative that cut across the groups' discourse: participants largely envision generative AI as a tool to perform menial work, under human review. Participants do not generally anticipate the disruptive changes to knowledge industries currently projected in common media and academic narratives. Participants do however envision generative AI may amplify four social forces currently shaping their industries: deskilling, dehumanization, disconnection, and disinformation. We describe these forces, and then we provide additional detail regarding attitudes in specific knowledge industries. We conclude with a discussion of implications and research challenges for the HCI community.

Authors
Allison Woodruff
Google, Mountain View, California, United States
Renee Shelby
Google, San Francisco, California, United States
Patrick Gage Kelley
Google, New York, New York, United States
Steven Rousso-Schindler
CSU Long Beach, Long Beach, California, United States
Jamila Smith-Loud
Google, San Francisco, California, United States
Lauren Wilcox
Google, Mountain View, California, United States
Paper URL

doi.org/10.1145/3613904.3642700

Video
“Sometimes it’s Like Putting the Track in Front of the Rushing Train”: Having to Be ‘On Call’ for Work Limits the Temporal Flexibility of Crowdworkers
Abstract

Research suggests that the temporal flexibility advertised to crowdworkers by crowdsourcing platforms is limited by both client-imposed constraints (e.g., strict completion times) and crowdworkers’ tooling practices (e.g., multitasking). In this article, we explore an additional contributor to workers’ limited temporal flexibility: the design of crowdsourcing platforms, namely requiring crowdworkers to be ‘on call’ for work. We conducted two studies to investigate the impact of having to be ‘on call’ on workers’ schedule control and job control. We find that being ‘on call’ impacted (1) participants’ ability to schedule their time and stick to planned work hours, and (2) the pace at which participants worked and took breaks. The results of the two studies suggest that the ‘on-demand’ nature of crowdsourcing platforms can limit workers’ temporal flexibility by reducing schedule control and job control. We conclude the article by discussing the implications of the results for (a) crowdworkers, (b) crowdsourcing platforms, and (c) the wider platform economy.

Authors
Laura Lascau
University College London, London, United Kingdom
Duncan P. Brumby
University College London, London, United Kingdom
Sandy J. J. Gould
Cardiff University, Cardiff, United Kingdom
Anna L. Cox
University College London, London, United Kingdom
Video