AI-driven recommender systems are often perceived as personalization black boxes, limiting users’ ability to understand how their data shapes content (information asymmetry) or to influence system behavior meaningfully (power asymmetry). This study explores how design can strengthen user agency by integrating transparency with actionable control. We developed a provotype that introduces new interface features for managing data use, discovering varied content, and configuring context-based recommendation modes. Walkthroughs and interviews with 19 participants show how these features help users interpret personalization signals, understand how their actions influence outcomes, address concerns ranging from unwanted inference to narrow feeds (e.g., filter bubbles), and build trust in the system. We also identify strategies for promoting adoption and awareness of agency-enhancing features. Overall, our findings reaffirm users’ desire for active influence over personalization and contribute concrete interface mechanisms, with empirical insights, for designing recommender systems that foreground user autonomy and fairness in AI-driven content delivery.
ACM CHI Conference on Human Factors in Computing Systems