End User Authoring of Personalized Content Classifiers: Comparing Example Labeling, Rule Writing, and LLM Prompting

Abstract

Existing tools for laypeople to create personal classifiers often assume a motivated user working uninterrupted in a single, lengthy session. However, users tend to engage with social media casually, in many short sessions on an ongoing, daily basis. To make creating personal classifiers for content curation easier for such users, tools should support rapid initialization and iterative refinement. In this work, we compare three strategies---(1) example labeling, (2) rule writing, and (3) large language model (LLM) prompting---for end users to build personal content classifiers. In an experiment with 37 non-programmers tasked with creating personalized moderation filters, we found that participants preferred different initialization strategies in different contexts, despite LLM prompting's better performance. However, all strategies faced challenges with iterative refinement. To overcome difficulties in iterating on their prompts, participants even adopted hybrid approaches, such as embedding labeled examples in prompts as in-context examples or writing rule-like prompts.

Authors
Leijie Wang
University of Washington, Seattle, Washington, United States
Kathryn Yurechko
Washington and Lee University, Lexington, Virginia, United States
Pranati Dani
University of Washington, Seattle, Washington, United States
Quan Ze Chen
University of Washington, Seattle, Washington, United States
Amy X. Zhang
University of Washington, Seattle, Washington, United States
DOI

10.1145/3706598.3713691

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713691

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Writing Support and Content Moderation

G416+G417
7 presentations
2025-04-29 20:10 – 21:40