Interface Design for Crowdsourcing Hierarchical Multi-Label Text Annotations

Abstract

Human data labeling is an important and expensive task at the heart of supervised learning systems. Hierarchies help humans understand and organize concepts. We ask whether and how concept hierarchies can inform the design of annotation interfaces to improve labeling quality and efficiency. We study this question through annotation of vaccine misinformation, where the labeling task is difficult and highly subjective. We investigate six user interface designs for crowdsourcing hierarchical labels by collecting over 18,000 individual annotations. Under a fixed budget, integrating hierarchies into the design improves crowdsource workers' F1 scores. We attribute this to (1) grouping similar concepts, which improves F1 scores by +0.16 over random groupings, (2) strong relative performance on high-difficulty examples (a relative F1-score difference of +0.40), and (3) filtering out obvious negatives, which increases precision by +0.07. Ultimately, labeling schemes that integrate the hierarchy outperform those that do not, achieving a mean F1 of 0.70.

Authors
Rickard Stureborg
Duke University, Durham, North Carolina, United States
Bhuwan Dhingra
Duke University, Durham, North Carolina, United States
Jun Yang
Duke University, Durham, North Carolina, United States
Paper URL

https://doi.org/10.1145/3544548.3581431


Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Interaction with AI & Robots

Hall A
6 presentations
2023-04-25, 01:35–03:00