Decision Making with AI

Conference Name
CHI 2025
From Scores to Careers: Understanding AI’s Role in Supporting Collaborative Family Decision-Making in Chinese College Applications
Abstract

This study investigates how 18-year-old students, parents, and experts in China utilize artificial intelligence (AI) tools to support decision-making in college applications during the college entrance exam (Gaokao), a highly competitive, score-driven, annual national exam. Through 32 interviews, we examine the use of Quark GaoKao, an AI tool that generates college application lists and acceptance probabilities based on exam scores, historical data, preferred locations, etc. Our findings show that AI tools are predominantly used by parents with limited involvement from students, and often focus on immediate exam results, failing to address long-term career goals. We also identify challenges such as misleading AI recommendations and irresponsible use of AI by third-party consultant agencies. Finally, we offer design insights to better support multiple stakeholders' decision-making in families, especially in the Chinese context, and discuss how emerging AI tools create barriers for families with fewer resources.
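
The paper does not describe Quark GaoKao's internals, but the core idea of turning an exam score plus historical admission data into acceptance probabilities can be sketched. The following Python snippet is a hypothetical illustration only: the universities, cutoff scores, and the normal-distribution assumption are made up, and real tools draw on much richer data (score ranks, admission quotas, locations).

```python
from statistics import NormalDist, mean, stdev

# Hypothetical historical admission cutoff scores per university.
HISTORY = {
    "University A": [612, 618, 615, 620],
    "University B": [590, 585, 594, 588],
    "University C": [555, 560, 548, 552],
}

def acceptance_probability(score: float, cutoffs: list) -> float:
    """Estimate P(score clears this year's cutoff), modeling the cutoff
    as roughly normal around its historical mean."""
    return NormalDist(mean(cutoffs), stdev(cutoffs)).cdf(score)

def application_list(score: float) -> list:
    """Rank universities by estimated acceptance probability."""
    ranked = [(u, acceptance_probability(score, c)) for u, c in HISTORY.items()]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

for university, p in application_list(600):
    print(f"{university}: estimated acceptance probability {p:.0%}")
```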

Authors
Si Chen
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Jingyi Xie
Pennsylvania State University, University Park, Pennsylvania, United States
Ge Wang
Stanford University, Stanford, California, United States
Haizhou Wang
Pennsylvania State University, University Park, Pennsylvania, United States
Haocong Cheng
University of Illinois Urbana-Champaign, Champaign, Illinois, United States
Yun Huang
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
DOI

10.1145/3706598.3713341

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713341

The Role of Initial Acceptance Attitudes Toward AI Decisions in Algorithmic Recourse
Abstract

Algorithmic recourse provides counterfactual suggestions to individuals who receive unfavorable AI decisions; the aim is to help them understand the reasoning and guide future actions. While most research focuses on generating reasonable and actionable recourse, it often overlooks how individuals' initial reactions to AI decisions influence their perceptions of subsequent recourses and their ultimate acceptance of the decision. To explore this, we conducted a user experiment (N=534) simulating an automobile loan application scenario. Statistical analysis revealed that participants who initially reacted negatively to the AI decision perceived the recourse as less reasonable and actionable, reinforcing their negative attitudes. However, when the recourse was perceived as explaining decision criteria or proposing realistic action plans, participants' attitudes shifted from negative to positive. These findings offer design implications for recourse systems that enhance the acceptance of individuals negatively affected by AI decisions.
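
For readers unfamiliar with the technique: a recourse is a counterfactual suggestion of the form "had your income been higher by X, the loan would have been approved." Below is a minimal, self-contained Python sketch assuming a toy linear loan-scoring model; the features, weights, threshold, and step sizes are illustrative assumptions, not the setup used in the paper's experiment.

```python
# Toy linear loan-scoring model: approve iff score(applicant) >= 0.
WEIGHTS = {"income_k": 0.8, "debt_k": -1.2, "years_employed": 0.5}
BIAS = -30.0
# Feasible per-step changes the applicant could realistically make.
STEPS = {"income_k": 5.0, "debt_k": -2.0, "years_employed": 1.0}

def score(applicant: dict) -> float:
    return BIAS + sum(WEIGHTS[f] * v for f, v in applicant.items())

def recourse(applicant: dict, max_steps: int = 20) -> dict:
    """Greedily apply small feasible changes until the decision flips,
    yielding a counterfactual plan the applicant could act on."""
    plan = dict(applicant)
    for _ in range(max_steps):
        if score(plan) >= 0:
            break
        # take the feasible change with the largest score improvement
        feature = max(STEPS, key=lambda f: WEIGHTS[f] * STEPS[f])
        plan[feature] += STEPS[feature]
    return plan

applicant = {"income_k": 30.0, "debt_k": 10.0, "years_employed": 2.0}
print("decision:", "approve" if score(applicant) >= 0 else "deny")
print("suggested recourse:", recourse(applicant))
```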

Authors
Tomu Tominaga
NTT Corporation, Yokosuka, Kanagawa, Japan
Naomi Yamashita
NTT Corporation, Keihanna, Kyoto, Japan
Takeshi Kurashima
NTT Corporation, Yokosuka, Kanagawa, Japan
DOI

10.1145/3706598.3713573

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713573

Hype versus Historical Continuity: Situating the Rise of AI in Climate and Disaster Risk Modeling
Abstract

As governments increasingly adopt Artificial Intelligence (AI) across different application sectors, advocates argue that it will create new disruptions by democratizing access, improving accuracy, and lowering costs. In practice, uncritical adoption of AI tools has been shown to cause significant harms. Our study uses a historical lens to examine the uptake of AI in climate risk management through a study of climate and disaster risk modeling. These techniques originated in the insurance industry but are now incorporated into many climate and disaster governance processes. Using the concept of 'insurance logics', we demonstrate that many of the original aspects of disaster risk modeling remain despite the transfer of risk assessment tools from the insurance industry to the public sector and the new techniques made possible by AI. This highlights technological continuity, rather than disruption, as a key driver of contemporary risk modeling practice. Doing so helps to unsettle problematic, though difficult to identify, aspects of supposedly disruptive technologies and creates possibilities for alternatives.

Authors
Shreyasha Paudel
University of Toronto, Toronto, Ontario, Canada
Sabine Loos
University of Michigan, Ann Arbor, Michigan, United States
Robert Soden
University of Toronto, Toronto, Ontario, Canada
DOI

10.1145/3706598.3713985

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713985

Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making
Abstract

Traditional AI-assisted decision-making systems often provide fixed recommendations that users must either accept or reject entirely, limiting meaningful interaction—especially in cases of disagreement. To address this, we introduce Human-AI Deliberation, an approach inspired by human deliberation theories that enables dimension-level opinion elicitation, iterative decision updates, and structured discussions between humans and AI. At the core of this approach is Deliberative AI, an assistant powered by large language models (LLMs) that facilitates flexible, conversational interactions and precise information exchange with domain-specific models. Through a mixed-methods user study, we found that Deliberative AI outperforms traditional explainable AI (XAI) systems by fostering appropriate human reliance and improving task performance. By analyzing participant perceptions, user experience, and open-ended feedback, we highlight key findings, discuss potential concerns, and explore the broader applicability of this approach for future AI-assisted decision-making systems.
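
The authors' system combines LLMs with domain-specific models; the skeleton below only sketches the structure of the dimension-level, iterative loop the abstract describes. The dimensions, the llm placeholder, and the round protocol are assumptions made for illustration.

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    # Placeholder for a real LLM call; returns a canned reply so the
    # sketch stays self-contained and runnable.
    return f"[AI] Evidence and counterpoints regarding: {prompt[:50]}..."

@dataclass
class Deliberation:
    dimensions: list                         # decision criteria, one by one
    human_views: dict = field(default_factory=dict)
    ai_views: dict = field(default_factory=dict)

    def round(self, opinions: dict) -> None:
        """One deliberation round: record per-dimension opinions, let the
        AI respond to each, and leave the final decision to the human."""
        for dim, view in opinions.items():
            self.human_views[dim] = view
            self.ai_views[dim] = llm(f"The user believes '{view}' about {dim}.")
            print(f"{dim}: you said '{view}'\n  {self.ai_views[dim]}")

session = Deliberation(dimensions=["risk", "expected return", "time horizon"])
session.round({"risk": "too high for stocks", "expected return": "bonds suffice"})
# ...the human may now revise their opinions and run further rounds.
```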

Award
Honorable Mention
Authors
Shuai Ma
The Hong Kong University of Science and Technology, Hong Kong, China
Qiaoyi Chen
The Hong Kong University of Science and Technology, Hong Kong, China
Xinru Wang
Purdue University, West Lafayette, Indiana, United States
Chengbo Zheng
Hong Kong University of Science and Technology, Hong Kong, Hong Kong
Zhenhui Peng
Sun Yat-sen University, Zhuhai, Guangdong Province, China
Ming Yin
Purdue University, West Lafayette, Indiana, United States
Xiaojuan Ma
Hong Kong University of Science and Technology, Hong Kong, Hong Kong
DOI

10.1145/3706598.3713423

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713423

How a Clinical Decision Support System Changed the Diagnosis Process: Insights from an Experimental Mixed-Method Study in a Full-Scale Anesthesiology Simulation
Abstract

Recent advancements in artificial intelligence have sparked discussions on how clinical decision-making can be supported. New clinical decision support systems (CDSSs) have been developed and evaluated through workshops and interviews. However, limited research exists on how CDSSs affect decision-making as it unfolds, particularly in settings such as acute care, where decisions are made collaboratively under time pressure and uncertainty. Using a mixed-method study, we explored the impact of a CDSS on decision-making in anesthetic teams during simulated operating room crises. Fourteen anesthetic teams participated in high-fidelity simulations, half using a CDSS prototype for comparative analysis. Qualitative findings from conversation analysis and quantitative results on decision-making efficiency and workload revealed that the CDSS changed team structure, communication, and diagnostic processes. It homogenized decision-making, empowered nursing staff, and introduced friction between analytical and intuitive thinking. We discuss whether these changes are beneficial or detrimental and offer insights to guide future CDSS design.

Award
Honorable Mention
Authors
Sara Wolf
Julius-Maximilians-Universität Würzburg, Würzburg, Germany
Tobias Grundgeiger
Julius-Maximilians-Universität Würzburg, Würzburg, Germany
Raphael Zähringer
Julius-Maximilians-Universität Würzburg, Würzburg, Germany
Lora Shishkova
Julius-Maximilians-Universität Würzburg, Würzburg, Germany
Franzisca Maas
Julius-Maximilians-Universität Würzburg, Würzburg, Germany
Christina Dilling
Universitätsklinikum Würzburg, Würzburg, Germany
Oliver Happel
Universitätsklinikum Würzburg, Würzburg, Germany
DOI

10.1145/3706598.3713372

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713372

The Amplifying Effect of Explainability in AI-assisted Decision-making in Groups
Abstract

In the era of artificial intelligence, AI-assisted decision-making has become a common paradigm. Explainable Artificial Intelligence (XAI) is one of the most explored factors for improving the transparency of AI tools in AI-assisted decision-making, but sometimes with contradictory results. Furthermore, while individual AI-assisted decision-making has garnered substantial investigation, the domain of group AI-assisted decision-making remains notably underexplored. This research presents a first look at the impact of explainability and team composition on AI-assisted decision-making. With a controlled experiment on mushroom edibility classification with 89 participants, we show that the impact of XAI is more pronounced in group (two-person) decision-making than in individual decision-making. Compared to individual decision makers, groups rely less on incorrect AI recommendations when explanations are available, but rely more on them when explanations are absent. This phenomenon underscores the amplified effect of explainability in AI-assisted decision-making in group settings.
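
The reliance effects reported here are typically quantified by splitting trials according to whether the AI's recommendation was correct and measuring how often participants followed it in each subset. A small Python sketch with made-up trial data (not the study's data):

```python
# Each trial records whether the AI advice was correct and whether the
# participant (or group) followed it. Values below are fabricated.
trials = [
    (True, True), (True, True), (False, True),
    (False, False), (True, False), (False, True),
]

def reliance(trials: list, on_correct: bool) -> float:
    """Fraction of trials in which the human followed the AI, restricted
    to trials where the AI was correct (appropriate reliance) or
    incorrect (over-reliance)."""
    subset = [followed for ai_ok, followed in trials if ai_ok == on_correct]
    return sum(subset) / len(subset)

print(f"reliance on correct AI:   {reliance(trials, True):.2f}")
print(f"reliance on incorrect AI: {reliance(trials, False):.2f}")
```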

Authors
Regina de Brito Duarte
INESC-ID, Lisbon, Portugal
Mónica C. Abreu
University of Lisbon, Lisbon, Portugal
Joana Campos
INESC-ID, Lisbon, Portugal
Ana Paiva
INESC-ID, Lisbon, Portugal
DOI

10.1145/3706598.3713534

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713534

AI, Help Me Think—but for Myself: Assisting People in Complex Decision-Making by Providing Different Kinds of Cognitive Support
Abstract

How can we design AI tools that effectively support human decision-making by complementing and enhancing users' reasoning processes? Common recommendation-centric approaches face challenges such as inappropriate reliance or a lack of integration with users' decision-making processes. Here, we explore an alternative interaction model in which the AI outputs build upon users' own decision-making rationales. We compare this approach, which we call ExtendAI, with a recommendation-based AI (RecommendAI). Participants in our mixed-methods user study interacted with both AIs as part of an investment decision-making task. We found that the AIs had different impacts: ExtendAI integrated better into the decision-making process and people's own thinking and led to slightly better outcomes, while RecommendAI provided more novel insights and required less cognitive effort. We discuss the implications of these and other findings, along with three tensions of AI-assisted decision-making that our study revealed.
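
The abstract describes an interaction model rather than an algorithm, but the contrast can be made concrete at the prompt level. The wording below is a hypothetical sketch of the two conditions, not the study's actual prompts.

```python
def recommend_prompt(task: str) -> str:
    """Recommendation-based condition: the AI issues its own decision."""
    return f"Task: {task}\nGive your recommended decision with a brief justification."

def extend_prompt(task: str, user_rationale: str) -> str:
    """ExtendAI-style condition: the AI builds on the user's rationale."""
    return (
        f"Task: {task}\n"
        f"The user reasons: \"{user_rationale}\"\n"
        "Do not issue your own recommendation. Instead, extend this reasoning:\n"
        "point out gaps, add supporting or conflicting considerations, and\n"
        "let the user draw the conclusion."
    )

task = "Allocate $10k across three index funds."
print(extend_prompt(task, "Fund A has low fees, so I'd put most of it there."))
```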

Authors
Leon Reicherts
Microsoft Research, Cambridge, United Kingdom
Zelun Tony Zhang
fortiss GmbH, Munich, Germany
Elisabeth von Oswald
Politecnico di Milano, Milan, Italy
Yuanting Liu
fortiss GmbH, Munich, Germany
Yvonne Rogers
UCL, London, United Kingdom
Mariam Hassib
HCE, Munich, Germany
DOI

10.1145/3706598.3713295

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713295
