Recent discussions at the intersection of journalism, HCI, and human-centered computing ask how technologies can help create reader-oriented news experiences. The current paper takes up this initiative by focusing on immigrant readers, a group that reports significant difficulties engaging with mainstream news yet has received limited attention in prior research. We report findings from our co-design research with eleven immigrant readers living in the United States and seven journalists working in the same region, which aimed to enhance the news experience of the former. Data collected from all participants revealed an “unaddressed-or-unaccountable” paradox that challenges value alignment between immigrant readers and journalists. This paradox points to four metaphors for how conversational AI agents can be designed to assist news reading. Each metaphor requires conversational AI, journalists, and immigrant readers to coordinate their shared responsibilities in a distinct manner. These findings provide insights into reader-oriented news experiences with AI in the loop.
Journalists rely on their agency---the ability to exercise independent judgment in alignment with their values---to fulfill their democratic social role. In this study, we investigate how LLM-infused writing tools reshape journalists' agency in editorial decision making. In interviews with 20 science journalists, we presented four hypothetical LLM-infused writing tools representing a range of possible design space configurations. We find that journalists are selectively willing to cede control: they view AI that gathers information or offers feedback as supporting their efficiency by automating execution while leaving decision making intact. In contrast, they see AI that generates core ideas or drafts as a threat to their autonomy, skill development, self-fulfillment, and professional relationships. This sensitivity extends to seemingly automatable tasks such as manipulating writing voice with AI, which are seen as reducing opportunities for reflection and critical thinking. We discuss the implications of these findings for design that preserves journalistic agency both in the moment and over the long term.
Declining newspaper revenues prompt local newsrooms to adopt automation to maintain efficiency and keep the community informed. However, current research provides a limited understanding of how local journalists work with digital data and which newsroom processes would benefit most from AI-supported (data) reporting. To bridge this gap, we conducted 21 semi-structured interviews with local journalists in Germany. Our study investigates how local journalists use data and AI (RQ1); the challenges they encounter when interacting with data and AI (RQ2); and the self-perceived opportunities of AI-supported reporting systems through the lens of discursive design (RQ3). Our findings reveal that local journalists do not fully leverage AI's potential to support data-related work. Despite their limited awareness of AI's capabilities, local journalists are willing to use it to process data and discover stories. Finally, we provide recommendations for improving AI-supported reporting in the context of local news, grounded in the journalists' socio-technical perspective and the future AI capabilities they envision.
News outlets are currently adopting AI to summarize news stories and experiment with conversational agents to convey news. In this paper, we explore this emerging practice, in which an AI-powered agent retells news in dialogue with the user rather than presenting them with a fixed narrative. In a co-speculation workshop with industry professionals and through a field trial of an LLM-based conversational news agent probe, we explore the design space of this conversational news format and examine the experiences, insights, and concerns reported by the participants. Our contribution is twofold. First, we identify five dimensions that shape how news can be retold by a conversational agent. Second, we provide a detailed empirical account of how users experience conversational news as clear and easy to follow and as enabling them to probe and question stories in new ways, and we show how these interactions are marked by tensions around trust, accuracy, and transparency.
Each day, individuals set behavioral goals such as eating healthier, exercising regularly, or increasing productivity. While psychological frameworks such as goal setting and implementation intentions can be helpful, applying them often requires structured external support, which interactive technologies can provide. We thus explored how large language model (LLM)-based chatbots can apply these frameworks to guide users in setting more effective goals. We conducted a preregistered randomized controlled experiment ($N = 543$) comparing chatbots with different combinations of three design features: guidance, suggestions, and feedback. We evaluated goal quality using subjective and objective measures. We found that, while guidance is already helpful, it is the addition of feedback that makes LLM-based chatbots effective in supporting participants' goal setting. In contrast, adaptive suggestions were less effective. Altogether, our study shows how to design chatbots that operationalize psychological frameworks to provide effective support for reaching behavioral goals.
As the use of LLM chatbots by students and researchers becomes more prevalent, universities are pressed to develop AI strategies. One strategy that many universities pursue is to customize pre-trained LLM-as-a-service (LLMaaS) chatbots. While most studies on LLMaaS chatbots prioritize technical adaptations, these systems are often characterized mainly by user-salient front-end customizations, e.g., interface changes. Yet, no existing studies have examined how users perceive such systems compared to commercial LLM chatbots. In a field study, we investigate how students and employees (N = 526) at a German university perceive and use their institution's customized LLMaaS chatbot compared to ChatGPT. Participants using both systems (n = 116) reported greater trust, higher perceived privacy, and fewer perceived hallucinations with their university's customized LLMaaS chatbot compared to ChatGPT. We discuss implications for research on users' trustworthiness assessment processes, and offer guidance for the design and deployment of LLMaaS chatbots.