Studies show that users do not reliably click more often on headlines that automated classifiers label as clickbait. Is this because the linguistic criteria the classifiers emphasize (e.g., the use of lists or questions) are not psychologically relevant to attracting interest, or because the classifications are confounded by unknown factors tied to the classifiers' assumptions? We address these possibilities with three studies: a quasi-experiment using headlines classified as clickbait by three machine-learning models (Study 1), a controlled experiment that varied the headline of an otherwise identical news story to contain a single clickbait characteristic (Study 2), and a computational analysis of four classifiers using real-world sharing data (Study 3). Studies 1 and 2 revealed that clickbait did not generate more curiosity than non-clickbait. Study 3 revealed that although some headlines do generate more engagement, the four classifiers agreed on a classification only 47% of the time, raising fundamental questions about their validity.
https://doi.org/10.1145/3411764.3445753
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)