Technologies for Decision Making

Conference Name
CHI 2025
Beyond Time and Accuracy: Strategies in Visual Problem-Solving
Abstract

In this paper, we explore viewers’ strategies in visual problem-solving tasks. We build on the traditional metrics of accuracy and time to better understand the learning that occurs as individuals interact with visualizations. We conducted an in-lab eye-tracking user study with 53 participants from diverse demographic backgrounds. Using questions from the Visualization Literacy Assessment Test (VLAT), we examined participants’ problem-solving strategies. We employed a mixed-methods approach capturing quantitative data on performance and gaze patterns, as well as qualitative data through think-alouds and sketches by participants as they reported on their problem-solving approach. Our analysis reveals not only the various cognitive strategies leading to correct answers but also the nature of mistakes and the conceptual misunderstandings that underlie them. This research contributes to the enhancement of visualization design guidelines by incorporating insights into the diverse strategies and cognitive processes employed by users.

Authors
Eric Mörth
Harvard Medical School, Boston, Massachusetts, United States
Zona Kostic
Harvard University, Boston, Massachusetts, United States
Nils Gehlenborg
Harvard Medical School, Boston, Massachusetts, United States
Hanspeter Pfister
Harvard University, Cambridge, Massachusetts, United States
Johanna Beyer
Harvard University, Cambridge, Massachusetts, United States
Carolina Nobre
University of Toronto, Toronto, Ontario, Canada
DOI

10.1145/3706598.3714024

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714024

Video
Towards Effective Human Intervention in Algorithmic Decision-Making: Understanding the Effect of Decision-Makers' Configuration on Decision-Subjects' Fairness Perceptions
Abstract

Human intervention is claimed to safeguard decision-subjects' rights in algorithmic decision-making and contribute to their fairness perceptions. However, how decision subjects perceive hybrid decision-maker configurations (i.e., combining humans and algorithms) is unclear. We address this gap through a mixed-methods study in an algorithmic policy enforcement context. Through qualitative interviews (Study 1; N_1=21), we identify three characteristics (i.e., decision-maker's profile, model type, input data provenance) that affect how decision-subjects perceive decision-makers' ability, benevolence, and integrity (ABI). Through a quantitative study (Study 2; N_2=223), we then systematically evaluate the individual and combined effects of these characteristics on decision-subjects' perceptions towards decision-makers, and fairness perceptions. We found that only decision-maker’s profile contributes to perceived ability, benevolence, and integrity. Interestingly, the effect of decision-maker's profile on fairness perceptions was mediated by perceived ability and integrity. Our findings have design implications for ensuring effective human intervention as a protection against harmful algorithmic decisions.

Authors
Mireia Yurrita
Delft University of Technology, Delft, Netherlands
Himanshu Verma
TU Delft, Delft, Netherlands
Agathe Balayn
Delft University of Technology, Delft, Netherlands
Ujwal Gadiraju
Delft University of Technology, Delft, Netherlands
Sylvia C. Pont
Delft University of Technology, Delft, Netherlands
Alessandro Bozzon
Delft University of Technology, Delft, Netherlands
DOI

10.1145/3706598.3713145

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713145

Video
Supporting Contraceptive Decision-Making in the Intermediated Pharmacy Setting in Kenya
Abstract

Adolescent girls and young women (AGYW) in sub-Saharan Africa face unique barriers to contraceptive access and lack AGYW-centered contraceptive decision-support resources. To empower AGYW to make informed choices and improve reproductive health outcomes, we developed a tablet-based application to provide contraceptive education and decision-making support in the pharmacy setting - a key source of contraceptive services for AGYW - in Kenya. We conducted workshops with AGYW and pharmacy providers in Kenya to gather app feedback and understand how to integrate the intervention into the pharmacy setting. Our analysis highlights how intermediated interactions - a multiuser, cooperative effort to enable technology use and information access - could inform a successful contraceptive intervention in Kenya. The potential strengths of intermediation in our setting inform implications for technological health interventions in intermediated scenarios in low- and middle-income countries, including challenges and opportunities for extending impact to different populations and integrating technology into resource-constrained healthcare settings.

Authors
Lisa Orii
University of Washington, Seattle, Washington, United States
Elizabeth K. Harrington
University of Washington, Seattle, Washington, United States
Serah Gitome
Kenya Medical Research Institute, Nairobi, Kenya
Nelson Kiprotich Cheruiyot
Independent design consultant, Nairobi, Kenya
Elizabeth Anne Bukusi
Kenya Medical Research Institute, Nairobi, Kenya
Sandy Cheng
University of Washington, Seattle, Washington, United States
Ariel Fu
University of Washington, Seattle, Washington, United States
Khushi Khandelwal
University of Washington, Seattle, Washington, United States
Shrimayee Narasimhan
University of Washington, Seattle, Washington, United States
Richard Anderson
University of Washington, Seattle, Washington, United States
DOI

10.1145/3706598.3713508

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713508

Video
Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills
Abstract

People's decision-making abilities often fail to improve or may even erode when they rely on AI for decision support, even when the AI provides informative explanations. We argue this is partly because people intuitively seek contrastive explanations, which clarify the difference between the AI's decision and their own reasoning, while most AI systems offer "unilateral" explanations that justify the AI’s decision but do not account for users' knowledge and thinking. To address potential human knowledge gaps, we introduce a framework for generating human-centered contrastive explanations that explain the difference between AI's choice and a predicted, likely human choice about the same task. Results from a large-scale experiment (N = 628) demonstrate that contrastive explanations significantly enhance users' independent decision-making skills compared to unilateral explanations, without sacrificing decision accuracy. As concerns about deskilling in AI-supported tasks grow, our research demonstrates that integrating human reasoning into AI design can promote human skill development.

Award
Honorable Mention
Authors
Zana Buçinca
Harvard University, Cambridge, Massachusetts, United States
Siddharth Swaroop
Harvard University, Cambridge, Massachusetts, United States
Amanda E. Paluch
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Finale Doshi-Velez
Harvard University, Cambridge, Massachusetts, United States
Krzysztof Z. Gajos
Harvard University, Allston, Massachusetts, United States
DOI

10.1145/3706598.3713229

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713229

Video
Advancing Problem-Based Learning with Clinical Reasoning for Improved Differential Diagnosis in Medical Education
Abstract

Medical education increasingly emphasizes students' ability to apply knowledge in real-world clinical settings, focusing on evidence-based clinical reasoning and differential diagnoses. Problem-based learning (PBL) addresses traditional teaching limitations by embedding learning into meaningful contexts and promoting active participation. However, current PBL practices are often confined to medical instructional settings, limiting students' ability to self-direct and refine their approaches based on targeted improvements. Additionally, the unstructured nature of information organization during analysis poses challenges for record-keeping and subsequent review. Existing research enhances PBL realism and immersion but overlooks the construction of logic chains and evidence-based reasoning. To address these gaps, we designed e-MedLearn, a learner-centered PBL system that supports more efficient application and practice of evidence-based clinical reasoning. Through controlled study (N=19) and testing interviews (N=13), we gathered data to assess the system's impact. The findings demonstrate that e-MedLearn improves PBL experiences and provides valuable insights for advancing clinical reasoning-based learning.

Authors
Yuansong Xu
ShanghaiTech University, Shanghai, China
Yuheng Shao
ShanghaiTech University, Shanghai, China
Jiahe Dong
ShanghaiTech University, Shanghai, China
Shaohan Shi
ShanghaiTech University, Shanghai, China
Chang Jiang
Shanghai Clinical Research and Trial Center, Shanghai, China
Quan Li
ShanghaiTech University, Shanghai, China
DOI

10.1145/3706598.3713772

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713772

Video
Selective Trust: Understanding Human-AI Partnerships in Personal Health Decision-Making Process
Abstract

As artificial intelligence (AI) becomes more embedded in personal health technology, its potential to transform health decision-making through personalised recommendations is becoming significant. However, there is limited understanding of how individuals perceive AI-assisted decision-making in the context of personal health. This study investigates the impact of AI-assisted decision-making on trust in physical activity-related health decisions. Employing MoveAI, a GPT-4.0-based physical activity decision-making tool, we conducted a mixed-methods study comprising an online survey (N=184) and semi-structured interviews (N=24) to explore this dynamic. Our findings emphasise the role of nuanced personal health recommendations and individual decision-making styles in shaping trust in AI-assisted personal health decision-making. This paper contributes to the HCI literature by elucidating the relationship between decision-making styles and trust in the AI-assisted personal health decision-making process and showing the challenges of aligning AI recommendations with individual decision-making preferences.

Authors
Sterre van Arum
University of Twente, Enschede, Netherlands
Hüseyin Uğur Genç
TU Delft, Delft, Netherlands
Dennis Reidsma
University of Twente, Enschede, Netherlands
Armağan Karahanoğlu
University of Twente, Enschede, Netherlands
DOI

10.1145/3706598.3713462

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713462

Video