This study group has ended. Thank you for participating.
Recent research highlights the potential of crowdsourcing in China. Yet very few studies explore the workplace context and experiences of Chinese crowdworkers, and those that do focus mainly on the work experiences of solo crowdworkers, leaving aside the substantial number of people working in 'crowdfarms'. This article addresses this gap as one of its primary concerns. Drawing on a study involving 48 participants, our research explores, compares and contrasts the work experiences of solo crowdworkers with those of crowdfarm workers. Our findings illustrate that the work experiences and contexts of solo workers and crowdfarm workers differ substantially with regard to their motivations, the ways they engage with crowdsourcing, the tasks they work on, and the crowdsourcing platforms they use. Overall, our study contributes to furthering the understanding of the work experiences of crowdworkers in China.
The rise in popularity of conversational agents has enabled humans to interact with machines more naturally. Recent work has shown that crowd workers in microtask marketplaces can complete a variety of human intelligence tasks (HITs) through conversational interfaces with output quality similar to that of traditional Web interfaces. In this paper, we investigate the effectiveness of using conversational interfaces to improve worker engagement in microtask crowdsourcing. We designed a text-based conversational agent that assists workers in task execution, and tested the performance of workers when interacting with agents that adopt different conversational styles. We conducted a rigorous experimental study on Amazon Mechanical Turk with 800 unique workers to explore whether output quality, worker engagement and workers' perceived cognitive load are affected by the conversational agent and its conversational style. Our results show that conversational interfaces can be effective in engaging workers, and that a suitable conversational style has the potential to improve worker engagement.
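The abstract describes the agent only at the level of its experimental design, but the core idea of presenting the same microtask items through different conversational styles can be sketched briefly. The snippet below is a minimal, hypothetical illustration assuming a simple scripted agent; the names (ConversationalTaskAgent, STYLES, reply_fn) are our own and are not taken from the paper.

```python
# Minimal, hypothetical sketch of a text-based microtask agent that can
# switch between two conversational styles. All names are illustrative
# and not taken from the paper.

STYLES = {
    "formal": {
        "greet": "Welcome. Please read the instructions carefully.",
        "ask": "Please provide a label for the following item:\n{item}",
        "thank": "Your response has been recorded.",
    },
    "casual": {
        "greet": "Hi there! Thanks for helping out today.",
        "ask": "What label would you give this one?\n{item}",
        "thank": "Got it, thanks a lot!",
    },
}


class ConversationalTaskAgent:
    """Wraps a batch of microtask items in a scripted conversation."""

    def __init__(self, style: str):
        self.prompts = STYLES[style]

    def run(self, items, reply_fn):
        """Present each item in turn; reply_fn maps a prompt to the worker's answer."""
        print(self.prompts["greet"])
        answers = []
        for item in items:
            answer = reply_fn(self.prompts["ask"].format(item=item))
            answers.append((item, answer))
        print(self.prompts["thank"])
        return answers


if __name__ == "__main__":
    agent = ConversationalTaskAgent(style="casual")
    results = agent.run(
        ["image_001.jpg", "image_002.jpg"],
        reply_fn=lambda prompt: "cat",  # stand-in for the worker's typed reply
    )
    print(results)
```

In the study's framing, the style dictionary is the experimental manipulation: the task content stays fixed while only the conversational wrapper changes between conditions.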
Teachable interfaces can empower end-users to attune machine learning systems to their idiosyncratic characteristics and environment by explicitly providing pertinent training examples. While such interfaces give users control, their effectiveness can be hindered by a lack of expertise or by misconceptions. We investigate how users conceptualize, experience, and reflect on their engagement in machine teaching by deploying a mobile teachable testbed on Amazon Mechanical Turk. Using a performance-based payment scheme, Mechanical Turk workers (N=100) are asked to train, test, and re-train a robust recognition model in real time with a few snapshots taken in their environment. We find that participants incorporate diversity into their examples, drawing parallels to how humans recognize objects independently of size, viewpoint, location, and illumination. Many of their misconceptions relate to consistency and to the model's capabilities for reasoning. With limited variation and few edge cases in their testing, the majority do not change strategies on a second training attempt.
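To make the train / test / re-train loop of such a teachable testbed concrete, the sketch below uses a nearest-centroid classifier over precomputed feature vectors as a stand-in for the recognition model. The class name TeachableRecognizer and the use of random vectors in place of image embeddings are assumptions made for the example, not details from the paper.

```python
# Minimal sketch of the train / test / re-train loop in a teachable object
# recognizer, assuming images have already been mapped to feature vectors
# (e.g., by a pretrained embedding model). A nearest-centroid classifier
# stands in for the recognition model; this is an illustration only.
import numpy as np


class TeachableRecognizer:
    def __init__(self):
        self.centroids = {}  # label -> mean feature vector

    def train(self, examples):
        """examples: dict mapping label -> list of feature vectors."""
        for label, feats in examples.items():
            self.centroids[label] = np.mean(feats, axis=0)

    def predict(self, feat):
        # Classify by nearest centroid (Euclidean distance).
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(feat - self.centroids[lbl]))

    def accuracy(self, test_set):
        """test_set: list of (feature vector, true label) pairs."""
        hits = sum(self.predict(f) == y for f, y in test_set)
        return hits / len(test_set)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in embeddings for a few snapshots of two household objects.
    mug = [rng.normal(0.0, 0.1, 8) for _ in range(5)]
    keys = [rng.normal(1.0, 0.1, 8) for _ in range(5)]
    model = TeachableRecognizer()
    model.train({"mug": mug, "keys": keys})
    test = [(rng.normal(0.0, 0.1, 8), "mug"), (rng.normal(1.0, 0.1, 8), "keys")]
    print("accuracy before re-training:", model.accuracy(test))
    # Re-train with more varied examples (different viewpoint / illumination),
    # mirroring the second training attempt described in the study.
    mug_varied = mug + [rng.normal(0.0, 0.3, 8) for _ in range(5)]
    model.train({"mug": mug_varied, "keys": keys})
    print("accuracy after re-training:", model.accuracy(test))
```

The point of the example is the workflow, not the model: the user's choice of how varied the training snapshots are (the "re-train with more variation" step) is exactly the teaching strategy the study examines.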
Peer-to-peer (P2P) energy-trading platforms have the potential to transform the current energy system. However, research is presently scarce on how people would like to participate in, and what they would expect to gain from, such platforms. We address this gap by exploring these questions in the context of the UK energy market. Using a qualitative interview study, we examine how 45 people with an interest in renewable energy understand P2P. We find that prospective users value the collective benefits of P2P and understand participation as a mechanism to support social, ecological and economic benefits for communities and larger groups. Drawing on the findings from the interview analysis, we explore broad design characteristics that a prospective P2P energy-trading platform should provide to meet the expectations and concerns voiced by our study participants.
Medical data labeling workflows critically depend on accurate assessments from human experts. Yet human assessments can vary markedly, even among medical experts. Prior research has demonstrated the benefits of labeler training on performance. Here we utilized two types of labeler training feedback: highlighting incorrect labels for difficult cases ("individual performance" feedback), and expert discussions from the adjudication of these cases. We presented ten generalist eye care professionals with either individual performance feedback alone, or individual performance feedback together with expert discussions from specialists. Compared to performance feedback alone, seeing expert discussions significantly improved generalists' understanding of the rationale behind the correct diagnosis while motivating changes in their own labeling approach, and also significantly improved average accuracy on one of four pathologies in a held-out test set. This work suggests that image adjudication may provide benefits beyond developing trusted consensus labels, and that exposure to specialist discussions can be an effective training intervention for medical diagnosis.