Exploring the Quality, Efficiency, and Representative Nature of Responses Across Multiple Survey Panels

Abstract

A common practice in HCI research is to conduct a survey to understand the generalizability of findings from smaller-scale qualitative research. These surveys are typically deployed to convenience samples, on low-cost platforms such as Amazon's Mechanical Turk or Survey Monkey, or to more expensive market research panels offered by a variety of premium firms. Costs can vary widely, from hundreds of dollars to tens of thousands of dollars depending on the platform used. We set out to understand the accuracy of ten different survey platforms/panels compared to ground truth data for a total of 6,007 respondents on 80 different aspects of demographic and behavioral questions. We found several panels that performed significantly better than others on certain topics, while different panels provided longer and more relevant open-ended responses. Based on this data, we highlight the benefits and pitfalls of using a variety of survey distribution options in terms of the quality, efficiency, and representative nature of the respondents and the types of responses that can be obtained.

Keywords
Survey
MTurk
SurveyMonkey
Representative
Authors
Frank Bentley
Yahoo/Verizon Media, Sunnyvale, CA, USA
Kathleen Margaret O'Neill
Yahoo/Verizon Media, New York, NY, USA
Katie Quehl
Yahoo/Verizon Media, Sunnyvale, CA, USA
Danielle Lottridge
University of Auckland, Auckland, New Zealand
DOI

10.1145/3313831.3376671

Paper URL

https://doi.org/10.1145/3313831.3376671

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Methods for understanding & characterising users

Paper session
Room 306AB
5 presentations
2020-04-28 20:00:00 – 2020-04-28 21:15:00