On the state of reporting in crowdsourcing experiments and a checklist to aid current practices

Abstract

Crowdsourcing is being increasingly adopted as a platform to run studies with human subjects. Running a crowdsourcing experiment involves several choices and strategies to successfully port an experimental design into an otherwise uncontrolled research environment, e.g., sampling crowd workers, mapping experimental conditions to micro-tasks, or ensuring quality contributions. While several guidelines inform researchers in making these choices, guidance on how and what to report from crowdsourcing experiments has been largely overlooked. If under-reported, implementation choices constitute sources of variability that can affect the experiment's reproducibility and prevent a fair assessment of research outcomes. In this paper, we examine the current state of reporting of crowdsourcing experiments and offer guidance to address the associated reporting issues. We start by identifying sensible implementation choices, relying on existing literature and interviews with experts, and then extensively analyze the reporting of 171 crowdsourcing experiments. Informed by this process, we propose a checklist for reporting crowdsourcing experiments.

Authors
Jorge Ramirez
University of Trento, Trento, Italy
Burcu Sayin
University of Trento, Trento, Italy
Marcos Baez
Université Claude Bernard Lyon 1, Villeurbanne, France
Fabio Casati
University of Trento, Trento, Italy
Luca Cernuzzi
Catholic University of Asuncion, Asuncion, Paraguay
Boualem Benatallah
UNSW Sydney, Sydney, NSW, Australia
Gianluca Demartini
University of Queensland, Brisbane, Australia
Paper URL

https://doi.org/10.1145/3479531

Conference: CSCW2021

The 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing

Session: Methods and Design Approaches

Papers Room D
8 presentations
2021-10-26 23:30:00 to 2021-10-27 01:00:00