Decision Making and Analysis

Conference Name
CHI 2025
BIASsist: Empowering News Readers via Bias Identification, Explanation, and Neutralization
Abstract

Biased news articles can distort readers' perceptions by presenting information in a way that favors or disfavors a particular point of view. Because such bias is subtly embedded in the text, it can shape readers' views every day without them even realizing it. To address this issue, we propose BIASsist, an LLM-based approach designed to mitigate bias in news articles. Drawing on existing research, we defined six types of bias and introduced three assistive components (identification, explanation, and neutralization) to provide a broader range of bias information and enhance readers' bias awareness. We conducted a mixed-methods study with 36 participants to evaluate the effectiveness of BIASsist. The results show that participants' bias awareness improved significantly and their interest in identifying bias increased. Participants also tended to engage more actively in critically evaluating articles. Based on these findings, we discuss BIASsist's potential to improve media literacy and critical thinking in today's era of information overload.
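
As a rough illustration of how the three assistive components could be chained, the sketch below runs identification, explanation, and neutralization as successive LLM calls. The complete() callable, the prompt wording, and the BIAS_TYPES labels are placeholders for illustration only, not BIASsist's actual prompts, model, or bias taxonomy.

```python
from typing import Callable, Dict

# Hypothetical labels standing in for the paper's six bias types
# (the actual taxonomy is defined in the paper and not reproduced here).
BIAS_TYPES = ["framing", "loaded language", "omission", "source selection", "spin", "stance"]

def assist_reader(article: str, complete: Callable[[str], str]) -> Dict[str, str]:
    """Chain the three assistive steps over one article: identify, explain, neutralize."""
    # Step 1: identification -- tag sentences that exhibit one of the bias types.
    identified = complete(
        "Tag each sentence in the article below that exhibits one of these bias types "
        f"({', '.join(BIAS_TYPES)}), naming the type.\n\n{article}"
    )
    # Step 2: explanation -- describe how the wording favors or disfavors a viewpoint.
    explained = complete(
        "For each tagged sentence, explain in one sentence how its wording favors "
        f"or disfavors a particular point of view.\n\n{identified}"
    )
    # Step 3: neutralization -- rewrite the tagged sentences in neutral wording.
    neutralized = complete(
        "Rewrite each tagged sentence in neutral wording while preserving the "
        f"reported facts.\n\n{identified}"
    )
    return {
        "identification": identified,
        "explanation": explained,
        "neutralization": neutralized,
    }
```

Keeping the three outputs separate, rather than silently rewriting the article, mirrors the stated goal of surfacing bias information to the reader rather than hiding it.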

Authors
Yeo-Gyeong Noh
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
MinJu Han
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Junryeol Jeon
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Jin-Hyuk Hong
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
DOI

10.1145/3706598.3713531

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713531

Video
More Forecasts, More (Decision) Problems: How Uncertainty Representations for Multiple Forecasts Impact Decision Making
Abstract

Users often have access to multiple forecasts regarding an event. Different forecasts incorporate different assumptions and epistemic information. A growing body of work argues against decision-making based solely on expected utility maximisation in multiple-forecast scenarios, in favour of other strategies such as maximin expected utility. In this work, we compare two approaches for depicting epistemic uncertainty, ensembles (a direct representation of multiple forecasts) and p-boxes (a representation that communicates only the bounds of epistemic uncertainty), in plots where individual distributions are shown as cumulative distribution functions (CDFs). We conduct three experiments to investigate how the visual representation affects the decision-making strategies that people adopt. Our results suggest that participants adopt conservative decision-making strategies (i.e. place greater weight on the worst-case forecast than on the best-case forecast) for both p-boxes and ensembles if the set of forecasts is uniformly distributed. However, if a majority of the forecasts are clustered near one of the bounds, participants may discount the forecast that appears as a visual outlier.
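
To make the contrast between decision rules concrete, here is a small sketch that is not from the paper: plain expected-utility maximization over the pooled (averaged) forecast and maximin expected utility over the individual forecasts can recommend different actions. The forecast probabilities and payoffs are invented for illustration.

```python
import numpy as np

# Hypothetical set of forecasts: each gives the probability that an adverse event occurs.
forecasts = np.array([0.05, 0.08, 0.10, 0.12, 0.15, 0.45])

# Payoff of each action given the outcome (columns: [no event, event]).
utilities = {
    "do nothing": np.array([0.0, -100.0]),        # costly only if the event happens
    "take precaution": np.array([-20.0, -20.0]),  # fixed cost either way
}

def expected_utility(p_event: float, u: np.ndarray) -> float:
    # EU = P(no event) * payoff(no event) + P(event) * payoff(event)
    return (1.0 - p_event) * u[0] + p_event * u[1]

# Rule 1: maximize expected utility under the pooled (mean) forecast.
pooled = forecasts.mean()
eu_choice = max(utilities, key=lambda a: expected_utility(pooled, utilities[a]))

# Rule 2: maximin expected utility -- maximize the worst expected utility across forecasts.
maximin_choice = max(
    utilities,
    key=lambda a: min(expected_utility(p, utilities[a]) for p in forecasts),
)

print(f"pooled-EU choice: {eu_choice}, maximin-EU choice: {maximin_choice}")
# With these numbers the rules disagree: pooled EU picks "do nothing",
# while maximin EU picks "take precaution" because of the single high forecast.
```

In an ensemble plot the 0.45 forecast would appear as a distinct outlying CDF, whereas a p-box would fold it into one of its bounds; differences of this kind are what the paper's experiments probe.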

Authors
Abhraneel Sarma
Northwestern University, Evanston, Illinois, United States
Maryam Hedayati
Northwestern University, Evanston, Illinois, United States
Matthew Kay
Northwestern University, Chicago, Illinois, United States
DOI

10.1145/3706598.3713725

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713725

Video
Generative AI and News Consumption: Design Fictions and Critical Analysis
Abstract

The emergence of Generative AI features in news applications may radically change news consumption and challenge journalistic practices. To explore the future potential and risks of this understudied area, we created six design fictions depicting scenarios such as virtual companions delivering news summaries to the user, AI providing context for news topics, and content being transformed into other formats on demand. The fictions, discussed with a multi-disciplinary group of experts, enabled a critical examination of the diverse ethical, societal, and journalistic implications of AI shaping this everyday activity. The discussions raised several concerns, suggesting that such consumer-oriented AI applications can clash with journalistic values and processes. These include fears that neither consumers nor AI could successfully balance engagement, objectivity, and truth, leading to a growing detachment from shared understanding. We offer critical insights into the potential long-term effects to guide design efforts in this emerging application area of GenAI.

Authors
Joel Kiskola
Tampere University, Tampere, Finland
Henrik Rydenfelt
University of Helsinki, Helsinki, Finland
Thomas Olsson
Tampere University, Tampere, Finland
Lauri Haapanen
University of Jyväskylä, Jyväskylä, Finland
Noora Vänttinen
Aalto University, Helsinki, Finland
Matti Nelimarkka
University of Helsinki, Helsinki, Finland
Minna Vigren
Lappeenranta-Lahti University of Technology, Lappeenranta, Finland
Salla-Maaria Laaksonen
University of Helsinki, Helsinki, Finland
Tuukka Lehtiniemi
University of Helsinki, Helsinki, Finland
DOI

10.1145/3706598.3713804

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713804

Video
Underspecified Human Decision Experiments Considered Harmful
Abstract

Decision-making with information displays is a key focus of research in areas like human-AI collaboration and data visualization. However, what constitutes a decision problem, and what is required for an experiment to conclude that decisions are flawed, remain imprecise. We present a widely applicable definition of a decision problem synthesized from statistical decision theory and information economics. We claim that to attribute loss in human performance to bias, an experiment must provide the information that a rational agent would need to identify the normative decision. We evaluate whether recent empirical research on AI-assisted decisions achieves this standard. We find that only 10 (26%) of 39 studies that claim to identify biased behavior presented participants with sufficient information to make this claim in at least one treatment condition. We motivate the value of studying well-defined decision problems by describing the characterization of performance losses that they make possible.
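
For readers unfamiliar with the underlying formalism, a standard statistical-decision-theoretic formalization, consistent with the abstract's framing but not necessarily the paper's exact definition, looks as follows: a state space Theta with prior pi, an action space A, a payoff function u, and a signal structure sigma describing what the decision-maker observes; the normative decision is the action that maximizes posterior expected payoff.

```latex
% One textbook-style formalization (an assumption here, not quoted from the paper):
% Theta = states, A = actions, u = payoff, pi = prior over states,
% sigma = signal structure mapping states to distributions over observable signals S.
\[
  \mathcal{D} = (\Theta, A, u, \pi, \sigma), \qquad
  u : A \times \Theta \to \mathbb{R}, \qquad
  \pi \in \Delta(\Theta), \qquad
  \sigma : \Theta \to \Delta(S)
\]
% The normative (rational-agent) decision after observing signal s:
\[
  a^{*}(s) \;=\; \arg\max_{a \in A} \; \mathbb{E}_{\theta \sim \pi(\cdot \mid s)}\bigl[\, u(a, \theta) \,\bigr]
\]
```

Under this reading, the abstract's criterion is that an experiment must give participants enough of the prior, the signal structure, and the payoffs that a rational agent seeing the same signal s could compute a*(s); otherwise a shortfall in performance cannot be cleanly attributed to bias.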

Authors
Jessica Hullman
Northwestern University, Evanston, Illinois, United States
Alex Kale
University of Chicago, Chicago, Illinois, United States
Jason Hartline
Northwestern University, Evanston, Illinois, United States
DOI

10.1145/3706598.3714063

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714063

Video
The News Says, the Bot Says: How Immigrants and Locals Differ in Chatbot-Facilitated News Reading
Abstract

News reading helps individuals stay informed about events and developments in society. Local residents and new immigrants often approach the same news differently, prompting the question of how technology, such as LLM-powered chatbots, can best enhance a reader-oriented news experience. The current paper presents an empirical study involving 144 participants from three groups in Virginia, United States: local residents born and raised there (N=48), Chinese immigrants (N=48), and Vietnamese immigrants (N=48). All participants read local housing news with the assistance of the Copilot chatbot. We collected data on each participant's Q&A interactions with the chatbot, along with their takeaways from news reading. While engaging with the news content, participants in both immigrant groups asked the chatbot fewer analytical questions than the local group. They also demonstrated a greater tendency to rely on the chatbot when formulating practical takeaways. These findings offer insights into technology design that aims to serve diverse news readers.

Authors
Yongle Zhang
University of Maryland, College Park, Maryland, United States
Phuong-Anh Nguyen-Le
University of Maryland, College Park, College Park, Maryland, United States
Kriti Singh
University of Maryland, College Park, University Park, Maryland, United States
Ge Gao
University of Maryland, College Park, Maryland, United States
DOI

10.1145/3706598.3714050

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714050

Video
Persuasion in Pixels and Prose: The Effects of Emotional Language and Visuals in Agent Conversations on Decision-Making
Abstract

The growing sophistication of Large Language Models allows conversational agents (CAs) to engage users in increasingly personalized and targeted conversations. While users may vary in their receptiveness to CA persuasion, stylistic elements and agent personalities can be adjusted on the fly. Combined with image generation models that create context-specific, realistic visuals, CAs have the potential to influence user behavior and decision-making. We investigate the effects of linguistic and visual elements used by CAs on user perception and decision-making in a charitable donation context with an online experiment (n=344). We find that while CA attitude influenced trust, it did not affect donation behavior. Visual primes played no role in shaping trust, though their absence resulted in higher donations and greater situational empathy. Perceptions of competence and situational empathy were potential predictors of donation amounts. We discuss the complex interplay of user and CA characteristics and the fine line between benign behavior signaling and manipulation.

Authors
Hüseyin Uğur Genç
TU Delft, Delft, Netherlands
Senthil Chandrasegaran
TU Delft, Delft, Netherlands
Tilman Dingler
Delft University of Technology, Delft, Netherlands
Himanshu Verma
TU Delft, Delft, Netherlands
DOI

10.1145/3706598.3713579

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713579

Video