The Challenge of Variable Effort Crowdsourcing & How Visible Gold Can Help

Abstract

We consider a class of variable effort human annotation tasks in which the number of labels required per item can greatly vary (e.g., finding all faces in an image, named entities in a text, bird calls in an audio recording, etc.). In such tasks, some items require far more effort than others to annotate. Furthermore, the per-item annotation effort is not known until after each item is annotated since determining the number of labels required is an implicit part of the annotation task itself. On an image bounding-box task with crowdsourced annotators, we show that annotator accuracy and recall consistently drop as effort increases. We hypothesize reasons for this drop and investigate a set of approaches to counteract it. Firstly, we benchmark on this task a set of general best-practice methods for quality crowdsourcing. Notably, only one of these methods actually improves quality: the use of visible gold questions that provide periodic feedback to workers on their accuracy as they work. Given these promising results, we then investigate and evaluate variants of the visible gold approach, yielding further improvement. Final results show a 7% improvement in bounding-box accuracy over the baseline. We discuss the generality of the visible gold approach and promising directions for future research.
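To make the "visible gold" idea concrete, the sketch below shows one way such feedback could be wired into an annotation queue: items with known answers are interleaved into a worker's task stream, and after each gold item the worker sees their running accuracy. This is an illustrative assumption for exposition, not the authors' implementation; names such as `build_queue`, `WorkerSession`, and `gold_every` are hypothetical.

```python
# Illustrative sketch of visible gold feedback (not the paper's implementation).
# Gold items with known answers are mixed into the task stream; after each gold
# item, the worker is shown their running accuracy on gold so far.
import random
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    is_gold: bool = False
    gold_answer: str | None = None  # known answer, set only for gold items


@dataclass
class WorkerSession:
    gold_seen: int = 0
    gold_correct: int = 0

    def record_gold(self, worker_answer: str, gold_answer: str) -> str:
        """Update running accuracy on gold items and return a feedback message."""
        self.gold_seen += 1
        if worker_answer == gold_answer:
            self.gold_correct += 1
        accuracy = self.gold_correct / self.gold_seen
        return f"Accuracy on check questions so far: {accuracy:.0%}"


def build_queue(regular: list[Item], gold: list[Item], gold_every: int = 5) -> list[Item]:
    """Insert one randomly chosen gold item after every `gold_every` regular items."""
    queue: list[Item] = []
    gold_pool = gold[:]
    random.shuffle(gold_pool)
    for i, item in enumerate(regular, start=1):
        queue.append(item)
        if gold_pool and i % gold_every == 0:
            queue.append(gold_pool.pop())
    return queue
```

The spacing of gold items (`gold_every` here) controls how often workers receive feedback; the paper's experiments compare variants of this general approach rather than prescribing a single schedule.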

Authors
Danula Hettiachchi
Amazon, Seattle, Washington, United States
Mike Schaekermann
Amazon, Toronto, Ontario, Canada
Tristan J. McKinney
Amazon, Palo Alto, California, United States
Matthew Lease
Amazon, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3476073

Conference: CSCW 2021

The 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing

Session: Crowds and Collaboration

Papers Room D
8 presentations
2021-10-26 20:30:00 – 22:00:00