From Scores to Careers: Understanding AI’s Role in Supporting Collaborative Family Decision-Making in Chinese College Applications
Description

This study investigates how 18-year-old students, parents, and experts in China use artificial intelligence (AI) tools to support decision-making in college applications around the Gaokao, a highly competitive, score-driven annual national college entrance exam. Through 32 interviews, we examine the use of Quark GaoKao, an AI tool that generates college application lists and acceptance probabilities based on exam scores, historical data, preferred locations, and other factors. Our findings show that AI tools are used predominantly by parents, with limited involvement from students, and often focus on immediate exam results while failing to address long-term career goals. We also identify challenges such as misleading AI recommendations and irresponsible use of AI by third-party consultancy agencies. Finally, we offer design insights to better support multi-stakeholder decision-making in families, especially in the Chinese context, and discuss how emerging AI tools create barriers for families with fewer resources.

The Role of Initial Acceptance Attitudes Toward AI Decisions in Algorithmic Recourse
Description

Algorithmic recourse provides counterfactual suggestions to individuals who receive unfavorable AI decisions; the aim is to help them understand the reasoning and guide future actions.

While most research focuses on generating reasonable and actionable recourse, it often overlooks how individuals' initial reactions to AI decisions influence their perceptions of subsequent recourses and their ultimate acceptance of the decision.

To explore this, we conducted a user experiment (N=534) simulating an automobile loan application scenario.

Statistical analysis revealed that participants who initially reacted negatively to the AI decision perceived the recourse as less reasonable and actionable, reinforcing their negative attitudes.

However, when the recourse was perceived as explaining decision criteria or proposing realistic action plans, participants' attitudes shifted from negative to positive.

These findings offer design implications for recourse systems that enhance the acceptance of individuals negatively affected by AI decisions.
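The recourse described above searches for a minimal, actionable change to an applicant's features that would flip an unfavorable decision. The idea can be sketched with a toy linear loan model; the scoring weights, threshold, and variable names here are purely illustrative assumptions, not the study's actual experimental setup:

```python
# Minimal sketch of algorithmic recourse for a loan decision.
# The model, weights, and threshold below are hypothetical.

def approve(income, debt):
    """Toy linear loan model: approve when the score crosses a threshold."""
    score = 0.6 * income - 0.8 * debt
    return score >= 30

def recourse(income, debt, step=1):
    """Find the smallest income increase that flips a rejection to approval."""
    if approve(income, debt):
        return 0  # already approved; no recourse needed
    extra = 0
    while not approve(income + extra, debt):
        extra += step
    return extra

# An applicant rejected at income=40, debt=20 learns how much more
# income would be needed for approval:
needed = recourse(40, 20)
```

Real recourse systems additionally weigh how actionable each suggested change is for the individual, which is precisely the perception this study measures.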

Hype versus Historical Continuity: Situating the Rise of AI in Climate and Disaster Risk Modeling
Description

As governments increasingly adopt Artificial Intelligence (AI) across different application sectors, advocates argue that it will create new disruptions by democratizing access, improving accuracy, and lowering costs. In practice, uncritical adoption of AI tools has been shown to cause significant harms. Our study uses a historical lens to examine the uptake of AI in climate risk management through a study of climate and disaster risk modeling. These techniques originated in the insurance industry, but are now incorporated into many climate and disaster governance processes. Using the concept of "insurance logics", we demonstrate that many of the original aspects of disaster risk modeling remain despite the transfer of risk assessment tools from the insurance industry to the public sector and new techniques made possible by AI. This highlights technological continuity, rather than disruption, as a key driver of contemporary risk modeling practice. Doing so helps to unsettle problematic, though hard-to-identify, aspects of supposedly disruptive technologies and create possibilities for alternatives.

Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making
Description

Traditional AI-assisted decision-making systems often provide fixed recommendations that users must either accept or reject entirely, limiting meaningful interaction—especially in cases of disagreement. To address this, we introduce Human-AI Deliberation, an approach inspired by human deliberation theories that enables dimension-level opinion elicitation, iterative decision updates, and structured discussions between humans and AI. At the core of this approach is Deliberative AI, an assistant powered by large language models (LLMs) that facilitates flexible, conversational interactions and precise information exchange with domain-specific models. Through a mixed-methods user study, we found that Deliberative AI outperforms traditional explainable AI (XAI) systems by fostering appropriate human reliance and improving task performance. By analyzing participant perceptions, user experience, and open-ended feedback, we highlight key findings, discuss potential concerns, and explore the broader applicability of this approach for future AI-assisted decision-making systems.

How a Clinical Decision Support System Changed the Diagnosis Process: Insights from an Experimental Mixed-Method Study in a Full-Scale Anesthesiology Simulation
Description

Recent advancements in artificial intelligence have sparked discussions on how clinical decision-making can be supported. New clinical decision support systems (CDSSs) have been developed and evaluated through workshops and interviews. However, limited research exists on how CDSSs affect decision-making as it unfolds, particularly in settings such as acute care, where decisions are made collaboratively under time pressure and uncertainty. Using a mixed-method study, we explored the impact of a CDSS on decision-making in anesthetic teams during simulated operating room crises. Fourteen anesthetic teams participated in high-fidelity simulations, half using a CDSS prototype for comparative analysis. Qualitative findings from conversation analysis and quantitative results on decision-making efficiency and workload revealed that the CDSS changed team structure, communication, and diagnostic processes. It homogenized decision-making, empowered nursing staff, and introduced friction between analytical and intuitive thinking. We discuss whether these changes are beneficial or detrimental and offer insights to guide future CDSS design.

The Amplifying Effect of Explainability in AI-assisted Decision-making in Groups
Description

In the era of artificial intelligence, AI-assisted decision-making has become a common paradigm. Explainable Artificial Intelligence (XAI) has been one of the most explored means of improving the transparency of AI tools in AI-assisted decision-making, but sometimes with contradictory results.

Furthermore, while individual AI-assisted decision-making has garnered substantial investigation, the domain of group AI-assisted decision-making remains notably underexplored. This research presents the first look at the impact of explainability and team composition on AI-assisted decision-making. In a controlled experiment on mushroom edibility classification with 89 participants, we show that the impact of XAI is more pronounced in group (two-person) decision-making than in individual decision-making.

Groups rely less on incorrect AI recommendations when explanations are available, but they rely more on incorrect AI recommendations when explanations are absent, compared to individual decision makers.

This phenomenon underscores the amplified effect of explainability in AI-assisted decision-making in group settings.

AI, Help Me Think—but for Myself: Assisting People in Complex Decision-Making by Providing Different Kinds of Cognitive Support
Description

How can we design AI tools that effectively support human decision-making by complementing and enhancing users' reasoning processes? Common recommendation-centric approaches face challenges such as inappropriate reliance or a lack of integration with users' decision-making processes. Here, we explore an alternative interaction model in which the AI outputs build upon users' own decision-making rationales. We compare this approach, which we call ExtendAI, with a recommendation-based AI (RecommendAI). Participants in our mixed-methods user study interacted with both AIs as part of an investment decision-making task. We found that the AIs had different impacts, with ExtendAI integrating better into the decision-making process and people's own thinking, leading to slightly better outcomes. RecommendAI was able to provide more novel insights while requiring less cognitive effort. We discuss the implications of these and other findings, along with three tensions of AI-assisted decision-making that our study revealed.
