Algorithms frequently manage online advertising markets, aligning advertisements with article topics. Our work investigates how users perceive the relevance of ads to articles when ads are placed using different keyword extraction algorithms, including Large Language Models (LLMs), and how transparency about the placement procedure influences these perceptions and behavioral intentions. We conducted an online user experiment (N = 498) in which ads were matched with news articles using three keyword extraction methods: TF-IDF, KeyBERT, and DeepSeek. Results indicate that lightweight methods can match advanced LLMs in delivering high user-perceived ad-article relevance, which in turn fosters click and purchase intentions. However, explaining ad-article placements by displaying the extracted keywords reduces ad interest and thereby weakens behavioral intentions, while simultaneously increasing perceived relevance and moderating algorithm effects. These findings highlight the complex impact of transparency-increasing explanations and suggest that algorithmic precision metrics must be complemented by measures of user perception and intention.
ACM CHI Conference on Human Factors in Computing Systems
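The keyword-based ad-article matching the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: it is a simplified, self-contained TF-IDF extractor (the paper's KeyBERT and DeepSeek conditions use embedding- and LLM-based extraction instead), and the corpus, ad texts, stopword list, and function names below are illustrative assumptions.

```python
import math
import re
from collections import Counter

# Small illustrative stopword list (an assumption, not from the paper).
STOPWORDS = {"the", "a", "an", "and", "are", "is", "with", "for", "of", "to", "in", "on"}

def tokenize(text):
    """Lowercase word tokens with stopwords removed."""
    return [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOPWORDS]

def tfidf_keywords(doc, corpus, k=3):
    """Top-k TF-IDF keywords of `doc`, with document frequencies taken over `corpus`."""
    tokens = tokenize(doc)
    tf = Counter(tokens)
    n = len(corpus)

    def idf(term):
        df = sum(1 for other in corpus if term in set(tokenize(other)))
        return math.log((1 + n) / (1 + df)) + 1.0  # smoothed IDF

    scores = {t: (count / len(tokens)) * idf(t) for t, count in tf.items()}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

def best_ad(article, corpus, ads, k=3):
    """Pick the ad (from a dict of ad-id -> ad text) sharing the most extracted keywords."""
    keywords = set(tfidf_keywords(article, corpus, k))
    return max(ads, key=lambda ad: len(keywords & set(tokenize(ads[ad]))))
```

In the study's transparency conditions, the keywords returned by the extractor would additionally be displayed to the user as an explanation of why a given ad was placed next to the article.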