Participation in crowd-sourced user studies is often driven by monetary incentives. However, standard payment schemes, which reward completion unless responses are of poor quality, may not instill sufficient accountability. By compromising user engagement, a lack of accountability can degrade data quality and the study's ecological validity. Here, we investigate alternative compensation strategies that manipulate payment framing and evaluate their impact on engagement through task effort, outcomes, and perception. We compared a standard scheme with implicit rejection risk against a reinforced accountability condition with explicit performance-linked deductions, as well as two dynamic conditions that unexpectedly switched between these strategies. In a study with 106 Prolific participants on an image captioning task, we found that only the shift from implicit risk to reinforced accountability significantly increased engagement, likely due to loss aversion after participants had already invested time. The reverse shift decreased effort to a level similar to that of the standard group. Our results highlight the importance of carefully designing compensation schemes for crowd-sourced studies.
ACM CHI Conference on Human Factors in Computing Systems