Biased news articles can distort readers' perceptions by presenting information in a way that favors or disfavors a particular point of view. Because such bias is subtly embedded in the text, it can shape readers' views daily without their even realizing it. To address this issue, we propose BIASsist, an LLM-based approach designed to mitigate bias in news articles. Drawing on existing research, we defined six types of bias and introduced three assistive components (identification, explanation, and neutralization) to provide a broader range of bias information and enhance readers' bias awareness. We conducted a mixed-methods study with 36 participants to evaluate the effectiveness of BIASsist. The results show that participants' bias awareness improved significantly and that their interest in identifying bias increased. Participants also tended to engage more actively in critically evaluating articles. Based on these findings, we discuss BIASsist's potential to improve media literacy and critical thinking in today's era of information overload.
https://dl.acm.org/doi/10.1145/3706598.3713531
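As a rough illustration of the kind of three-stage pipeline the BIASsist abstract describes, the sketch below wires identification, explanation, and neutralization prompts around a hypothetical call_llm function. The prompts and the bias taxonomy listed here are placeholder assumptions, not the paper's actual design.

```python
# Minimal sketch of a three-stage assistive pipeline of the kind described above
# (identification, explanation, neutralization). `call_llm` is a hypothetical
# placeholder for any chat-completion API; the prompts and bias taxonomy are
# illustrative assumptions, not the paper's implementation.
from typing import Callable

BIAS_TYPES = ["framing", "loaded word choice", "omission", "spin",
              "unsubstantiated claim", "opinion presented as fact"]

def assist(article: str, call_llm: Callable[[str], str]) -> dict:
    """Run the three assistive steps on one news article and return their outputs."""
    identified = call_llm(
        "List the sentences in this article that show any of these bias types "
        f"{BIAS_TYPES}:\n{article}")
    explained = call_llm(
        f"For each flagged sentence, briefly explain why it may be biased:\n{identified}")
    neutralized = call_llm(
        f"Rewrite each flagged sentence in neutral wording, preserving the facts:\n{identified}")
    return {"identification": identified,
            "explanation": explained,
            "neutralization": neutralized}
```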
Users often have access to multiple forecasts regarding an event. Different forecasts incorporate different assumptions and epistemic information. A growing body of work argues against decision-making based solely on expected-utility maximisation in multiple-forecast scenarios, in favour of other strategies such as maximin expected utility. In this work, we compare two approaches for depicting epistemic uncertainty, ensembles (a direct representation of multiple forecasts) and p-boxes (a representation that communicates only the bounds of epistemic uncertainty), in plots where individual distributions are represented as cumulative distribution functions (CDFs). We conduct three experiments to investigate the impact of the visual representation on the decision-making strategies that people adopt. Our results suggest that participants adopt conservative decision-making strategies (i.e. they place greater weight on the worst-case forecast than on the best-case forecast) for both p-boxes and ensembles if the set of forecasts is uniformly distributed. However, if a majority of the forecasts are clustered near one of the bounds, participants may discount the forecast that appears as a visual outlier.
https://dl.acm.org/doi/10.1145/3706598.3713725
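To make the contrast between the two representations concrete, here is a minimal sketch with made-up forecast numbers and an illustrative utility function: it builds a p-box as the pointwise bounds of an ensemble of empirical CDFs and computes a maximin-expected-utility choice across the ensemble.

```python
# Minimal sketch, with made-up numbers: an ensemble of forecasts, the p-box formed
# by its pointwise CDF bounds, and a maximin-expected-utility decision.
import numpy as np

# Hypothetical ensemble: each entry is one forecast's samples of the outcome.
rng = np.random.default_rng(0)
ensemble = [rng.normal(loc=mu, scale=5, size=1000) for mu in (20, 25, 30, 35)]

# Empirical CDF of each forecast on a common grid.
grid = np.linspace(0, 60, 200)
cdfs = np.array([[np.mean(samples <= x) for x in grid] for samples in ensemble])

# A p-box keeps only the pointwise bounds of the ensemble CDFs; how the forecasts
# cluster between those bounds (the effect studied in the abstract) is discarded.
p_box_lower, p_box_upper = cdfs.min(axis=0), cdfs.max(axis=0)

# Illustrative decision: flat cost of acting vs. a loss only if the outcome exceeds 30.
actions = {"protect": lambda x: np.full_like(x, -10.0),
           "do_nothing": lambda x: np.where(x > 30, -40.0, 0.0)}

def expected_utilities(action_fn):
    """Expected utility of one action under each forecast in the ensemble."""
    return np.array([np.mean(action_fn(samples)) for samples in ensemble])

# Maximin expected utility: judge each action by its worst-case forecast, then pick the best.
maximin_choice = max(actions, key=lambda a: expected_utilities(actions[a]).min())
print("maximin-EU action:", maximin_choice)
```

Under these assumed numbers the worst-case forecast dominates the choice, which is the conservative behavior the abstract attributes to participants viewing both p-boxes and ensembles.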
The emergence of Generative AI features in news applications may radically change news consumption and challenge journalistic practices. To explore the future potential and risks of this understudied area, we created six design fictions depicting scenarios such as virtual companions delivering news summaries to the user, AI providing context for news topics, and content being transformed into other formats on demand. The fictions, discussed with a multi-disciplinary group of experts, enabled a critical examination of the diverse ethical, societal, and journalistic implications of AI shaping this everyday activity. The discussions raised several concerns, suggesting that such consumer-oriented AI applications can clash with journalistic values and processes. These include fears that neither consumers nor AI could successfully balance engagement, objectivity, and truth, leading to a growing detachment from shared understanding. We offer critical insights into the potential long-term effects to guide design efforts in this emerging application area of GenAI.
https://dl.acm.org/doi/10.1145/3706598.3713804
Decision-making with information displays is a key focus of research in areas such as human-AI collaboration and data visualization. However, what constitutes a decision problem, and what is required for an experiment to conclude that decisions are flawed, remain imprecise. We present a widely applicable definition of a decision problem synthesized from statistical decision theory and information economics. We claim that to attribute loss in human performance to bias, an experiment must provide the information that a rational agent would need to identify the normative decision. We evaluate whether recent empirical research on AI-assisted decisions meets this standard. We find that only 10 of the 39 studies (26%) that claim to identify biased behavior presented participants with sufficient information in at least one treatment condition to support that claim. We motivate the value of studying well-defined decision problems by describing a characterization of the performance losses they allow us to conceive.
https://dl.acm.org/doi/10.1145/3706598.3714063
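The abstract's criterion, that an experiment must give participants the information a rational agent would need to identify the normative decision, can be illustrated with a toy decision problem. The states, signal likelihoods, and payoffs below are invented for illustration and are not taken from the paper.

```python
# Toy decision problem in the statistical-decision-theory sense: a prior over states,
# a signal-generating process, actions, and payoffs. With all of these specified, the
# normative (expected-utility-maximizing) action for each signal is well defined.
# All numbers are invented for illustration.
states = ["default", "repay"]                 # unobserved state of the world
actions = ["deny_loan", "grant_loan"]         # options available to the decision-maker
prior = {"default": 0.2, "repay": 0.8}        # prior over states

# Likelihood of an AI "risk flag" signal given each state.
likelihood = {("flag", "default"): 0.7, ("flag", "repay"): 0.1,
              ("no_flag", "default"): 0.3, ("no_flag", "repay"): 0.9}

# Payoff of each action in each state.
payoff = {("deny_loan", "default"): 0.0,  ("deny_loan", "repay"): -1.0,
          ("grant_loan", "default"): -5.0, ("grant_loan", "repay"): 2.0}

def posterior(signal):
    """Bayes posterior over states after observing the signal."""
    joint = {s: likelihood[(signal, s)] * prior[s] for s in states}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

def normative_action(signal):
    """Action that maximizes expected utility under the posterior."""
    post = posterior(signal)
    return max(actions, key=lambda a: sum(post[s] * payoff[(a, s)] for s in states))

for sig in ("flag", "no_flag"):
    print(sig, "->", normative_action(sig))   # flag -> deny_loan, no_flag -> grant_loan
```

Without access to the prior, the signal likelihoods, or the payoffs, a participant has no way to recover this benchmark, which is the gap the abstract's survey of 39 studies points to.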
News reading helps individuals stay informed about events and developments in society. Local residents and new immigrants often approach the same news differently, prompting the question of how technology, such as LLM-powered chatbots, can best enhance a reader-oriented news experience. This paper presents an empirical study involving 144 participants from three groups in Virginia, United States: local residents born and raised there (N=48), Chinese immigrants (N=48), and Vietnamese immigrants (N=48). All participants read local housing news with the assistance of the Copilot chatbot. We collected data on each participant's Q&A interactions with the chatbot, along with their takeaways from the news reading. While engaging with the news content, participants in both immigrant groups asked the chatbot fewer analytical questions than the local group did. They also demonstrated a greater tendency to rely on the chatbot when formulating practical takeaways. These findings offer insights into the design of technology that aims to serve diverse news readers.
https://dl.acm.org/doi/10.1145/3706598.3714050
The growing sophistication of Large Language Models allows conversational agents (CAs) to engage users in increasingly personalized and targeted conversations. While users may vary in their receptiveness to CA persuasion, stylistic elements and agent personalities can be adjusted on the fly. Combined with image generation models that create context-specific realistic visuals, CAs have the potential to influence user behavior and decision-making. We investigate the effects of linguistic and visual elements used by CAs on user perception and decision-making in a charitable donation context with an online experiment (n=344). We find that while CA attitude influenced trust, it did not affect donation behavior. Visual primes played no role in shaping trust, though their absence resulted in higher donations and situational empathy. Perceptions of competence and situational empathy emerged as potential predictors of donation amounts. We discuss the complex interplay of user and CA characteristics and the fine line between benign behavior signaling and manipulation.
https://dl.acm.org/doi/10.1145/3706598.3713579