Trust and Responsibility in AI

Conference Name
CHI 2025
Bridging AI and Humanitarianism: An HCI-Informed Framework for Responsible AI Adoption
Abstract

Advances in artificial intelligence (AI) hold transformative potential for humanitarian practice. Yet aligning this potential with the demands of humanitarian practice in dynamic and often resource-austere contexts remains a challenge. While research on Responsible AI provides high-level guidance, humanitarian practice demands nuanced approaches for which human-computer interaction (HCI) can provide a strong foundation. However, existing literature lacks a comprehensive examination of how HCI principles can inform responsible AI adoption in humanitarian practice. To address this gap, we conducted a reflexive thematic analysis of 34 interviews with AI technology experts, humanitarian practitioners, and humanitarian policy developers. Our contributions are twofold. First, we empirically identify three cross-cutting themes—AI risks in humanitarian practice, organisational readiness, and collaboration—that highlight common tensions in adopting AI for humanitarian practice. Second, by analysing their interconnectivities, we reveal intertwined obstacles and propose a conceptual HCI-informed framework.

Authors
Tigmanshu Bhatnagar
University College London, London, United Kingdom
Maarya Omar
University College London, London, United Kingdom
Davor Orlic
Jožef Stefan Institute, Ljubljana, Slovenia
James Smith
University College London, London, United Kingdom
Catherine Holloway
University College London, London, United Kingdom
Maria Kett
University College London, London, United Kingdom
DOI

10.1145/3706598.3713184

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713184

Good Performance Isn't Enough to Trust AI: Lessons from Logistics Experts on their Long-Term Collaboration with an AI Planning System
Abstract

While research on trust in human-AI interactions is gaining recognition, much of this work is conducted in lab settings that therefore lack ecological validity and often omit the trust development perspective. We investigated a real-world case in which logistics experts had worked with an AI system for several years (in some cases since its introduction). Through thematic analysis, three key themes emerged: First, although experts clearly point out the AI system's imperfections, they still developed trust over time. Second, however, inconsistencies and frequent efforts to improve the AI system disrupted trust development, hindering control, transparency, and understanding of the system. Finally, despite the system's overall trustworthiness, experts overrode correct AI decisions to protect their colleagues’ well-being. By comparing our results with the latest trust research, we confirm prior empirical work and contribute new perspectives, such as understanding the importance of human elements for trust development in human-AI scenarios.

Authors
Patricia K. Kahr
Eindhoven University of Technology, Eindhoven, Netherlands
Gerrit Rooks
Eindhoven University of Technology, Eindhoven, Netherlands
Chris Snijders
Eindhoven University of Technology, Eindhoven, Netherlands
Martijn C. Willemsen
Jheronimus Academy of Data Science, Den Bosch, Netherlands
DOI

10.1145/3706598.3713099

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713099

Unpacking Trust Dynamics in the LLM Supply Chain: An Empirical Exploration to Foster Trustworthy LLM Production & Use
Abstract

Research on trust in AI has been limited to a handful of trustors (e.g., end-users) and trustees (especially AI systems), and empirical explorations have remained in laboratory settings, overlooking factors that impact trust relations in the real world. Here, we broaden the scope of research by accounting for the supply chains that AI systems are part of. To this end, we present insights from an in-situ, empirical study of LLM supply chains. We conducted interviews with 71 practitioners and analyzed their (collaborative) practices through the lens of trust, drawing on literature in organizational psychology. Our work reveals complex trust dynamics at the junctions of the chains, with interactions between diverse technical artifacts, individuals, and organizations. These junctions might constitute terrain for uncalibrated reliance when trustors lack supply chain knowledge or when power dynamics are at play. Our findings bear implications for AI researchers and policymakers to promote AI governance that fosters calibrated trust.

Award
Honorable Mention
Authors
Agathe Balayn
Delft University of Technology, Delft, Netherlands
Mireia Yurrita
Delft University of Technology, Delft, Netherlands
Fanny Rancourt
ServiceNow, Montreal, Quebec, Canada
Fabio Casati
University of Trento, Trento, Italy
Ujwal Gadiraju
Delft University of Technology, Delft, Netherlands
DOI

10.1145/3706598.3713787

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713787

Bridging the Trust Gap: Investigating the Role of Trust Transfer in the Adoption of AI Instructors for Digital Privacy Education
Abstract

Recent studies have demonstrated how AI instructors can be used for digital privacy education. However, these studies also highlight the lack of trust that certain individuals, particularly older adults, have in such AI instructors as a major obstacle to their adoption. The current paper introduces "trust transfer" as a means to enhance appropriate trust in AI instructors and improve learning experiences. A between-subjects experiment (N = 217) was conducted to test the effect of a human introducing an AI instructor on users' trust and learning experiences. Our findings reveal that this trust transfer positively impacts the perceived trustworthiness of the instructor, as well as users' perception of learning and their enjoyment of the educational material, regardless of age. Based on our findings, we discuss how trust transfer can help calibrate users' trust in AI instructors, thereby fostering AI use in digital privacy education, with potential extensions to other domains.

Authors
Heba Aly
Clemson University, Clemson, South Carolina, United States
Matias Volonte
Clemson University, Charleston, South Carolina, United States
Kaileigh Angela Byrne
Clemson University, Clemson, South Carolina, United States
Bart Piet Knijnenburg
Clemson University, Clemson, South Carolina, United States
DOI

10.1145/3706598.3713570

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713570

Trusting Autonomous Teammates in Human-AI Teams - A Literature Review
Abstract

As autonomous AI agents become increasingly integrated into human teams, the level of trust humans place in these agents, viewed both as pieces of technology and, increasingly, as teammates, significantly impacts the success of human-AI teams (HATs). This work presents a literature review of HAT research that investigates humans' trust in their AI teammates. In this review, we first identify the ways in which trust was conceptualized and operationalized, which underscores the pressing need for clear definitions and consistent measurements. We then categorize and quantify the factors found to influence trust in an AI teammate, highlighting that agent-related factors (such as transparency and reliability) have the strongest impacts on trust in HAT research. We also identify under-explored factors related to humans, teams, and environments, as well as gaps for future HAT research and design.

Authors
Wen Duan
Clemson University, Clemson, South Carolina, United States
Christopher Flathmann
Clemson University, Clemson, South Carolina, United States
Nathan McNeese
Clemson University, Clemson, South Carolina, United States
Matthew J. Scalia
Arizona State University, Mesa, Arizona, United States
Ruihao Zhang
Arizona State University, Mesa, Arizona, United States
Jamie Gorman
Arizona State University, Tempe, Arizona, United States
Guo Freeman
Clemson University, Clemson, South Carolina, United States
Shiwen Zhou
Arizona State University, Mesa, Arizona, United States
Allyson Ivy Hauptman
Clemson University, Clemson, South Carolina, United States
Xiaoyun Yin
Arizona State University, Gilbert, Arizona, United States
DOI

10.1145/3706598.3713527

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713527

Robots, Chatbots, Self-Driving Cars: Perceptions of Mind and Morality Across Artificial Intelligences
Abstract

AI systems have rapidly advanced, diversified, and proliferated, but our knowledge of people’s perceptions of mind and morality in them is limited, despite its importance for outcomes such as whether people trust AIs and how they assign responsibility for AI-caused harms. In a preregistered online study, 975 participants rated 26 AI and non-AI entities. Overall, AIs were perceived to have low-to-moderate agency (e.g., planning, acting), between inanimate objects and ants, and low experience (e.g., sensing, feeling). For example, ChatGPT was rated as no more capable of feeling pleasure and pain than a rock. The analogous moral faculties, moral agency (doing right or wrong) and moral patiency (being treated rightly or wrongly), were rated higher and more varied, particularly moral agency: the highest-rated AI, a Tesla Full Self-Driving car, was rated as being as morally responsible for harm as a chimpanzee. We discuss how design choices can help manage perceptions, particularly in high-stakes moral contexts.

Authors
Ali Ladak
University of Edinburgh, Edinburgh, United Kingdom
Matti Wilks
University of Edinburgh, Edinburgh, United Kingdom
Steve Loughnan
University of Edinburgh, Edinburgh, United Kingdom
Jacy Reese Anthis
University of Chicago, Chicago, Illinois, United States
DOI

10.1145/3706598.3713130

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713130

"It’s Not the AI’s Fault Because It Relies Purely on Data": How Causal Attributions of AI Decisions Shape Trust in AI Systems
Abstract

Humans naturally seek to identify causes behind outcomes through causal attribution, yet Human-AI research often overlooks how users perceive causality behind AI decisions. We examine how this perceived locus of causality—internal or external to the AI—influences trust, and how decision stakes and outcome favourability moderate this relationship. Participants (N=192) engaged with AI-based decision-making scenarios operationalising varying loci of causality, stakes, and favourability, evaluating their trust in each AI. We find that internal attributions foster lower trust as participants perceive the AI to have high autonomy and decision-making responsibility. Conversely, external attributions portray the AI as merely "a tool" processing data, reducing its perceived agency and distributing responsibility, thereby boosting trust. Moreover, stakes moderate this relationship—external attributions foster even more trust in lower-risk, low-stakes scenarios. Our findings establish causal attribution as a crucial yet underexplored determinant of trust in AI, highlighting the importance of accounting for it when researching trust dynamics.

Award
Honorable Mention
Authors
Saumya Pareek
University of Melbourne, Melbourne, Victoria, Australia
Sarah Schömbs
University of Melbourne, Melbourne, Victoria, Australia
Eduardo Velloso
University of Sydney, Sydney, New South Wales, Australia
Jorge Goncalves
University of Melbourne, Melbourne, Victoria, Australia
DOI

10.1145/3706598.3713468

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713468
