“They’ve Over-Emphasized That One Search”: Controlling Unwanted Content on TikTok's For You Page

Abstract

Modern algorithmic recommendation systems seek to engage users through behavioral content-interest matching. While many platforms recommend content based on engagement metrics, others, like TikTok, deliver interest-based content, resulting in recommendations perceived as hyper-personalized compared to other platforms. TikTok's robust recommendation engine has led some users to suspect that the algorithm knows users "better than they know themselves," but this is not always true. In this paper, we explore TikTok users' perceptions of recommended content on their For You Page (FYP), specifically calling attention to unwanted recommendations. Through qualitative interviews with 14 current and former TikTok users, we find themes of frustration with recommended content, attempts to rid themselves of unwanted content, and varying degrees of success in eschewing such content. We discuss implications in the larger context of folk theorization and contribute concrete tactical and behavioral examples of "algorithmic persistence."

Authors
Julie A. Vera
University of Washington, Seattle, Washington, United States
Sourojit Ghosh
University of Washington, Seattle, Washington, United States
DOI

10.1145/3706598.3713666

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713666

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Dark Patterns and Content Moderation

Room: G304
6 presentations
2025-04-30 23:10:00
2025-05-01 00:40:00