Bridging AI and Humanitarianism: An HCI-Informed Framework for Responsible AI Adoption
Description

Advances in artificial intelligence (AI) hold transformative potential for humanitarian practice. Yet aligning this potential with the demands of humanitarian work in dynamic and often resource-austere contexts remains a challenge. While research on Responsible AI provides high-level guidance, humanitarian practice demands nuanced approaches for which human-computer interaction (HCI) can provide a strong foundation. However, existing literature lacks a comprehensive examination of how HCI principles can inform responsible AI adoption in humanitarian practice. To address this gap, we conducted a reflexive thematic analysis of 34 interviews with AI technology experts, humanitarian practitioners, and humanitarian policy developers. Our contributions are twofold. First, we empirically identify three cross-cutting themes (AI risks in humanitarian practice, organisational readiness, and collaboration) that highlight common tensions in adopting AI for humanitarian practice. Second, by analysing how these themes interconnect, we reveal intertwined obstacles and propose a conceptual HCI-informed framework.

Good Performance Isn't Enough to Trust AI: Lessons from Logistics Experts on their Long-Term Collaboration with an AI Planning System
Description

While research on trust in human-AI interactions is gaining recognition, much of this work is conducted in lab settings that lack ecological validity and often omit the trust-development perspective. We investigated a real-world case in which logistics experts had worked with an AI system for several years (in some cases since its introduction). Through thematic analysis, three key themes emerged. First, although experts clearly pointed out the AI system's imperfections, they still developed trust over time. Second, inconsistencies and frequent efforts to improve the AI system disrupted trust development, hindering control, transparency, and understanding of the system. Finally, despite the system's overall trustworthiness, experts overrode correct AI decisions to protect their colleagues' well-being. By comparing our results with recent trust research, we confirm prior empirical work and contribute new perspectives, such as the importance of human elements for trust development in human-AI scenarios.

Unpacking Trust Dynamics in the LLM Supply Chain: An Empirical Exploration to Foster Trustworthy LLM Production & Use
Description

Research on trust in AI has so far considered only a narrow set of trustors (e.g., end-users) and trustees (especially AI systems), and empirical explorations have largely remained in laboratory settings, overlooking factors that shape trust relations in the real world. Here, we broaden the scope of research by accounting for the supply chains that AI systems are part of. To this end, we present insights from an in-situ, empirical study of LLM supply chains. We conducted interviews with 71 practitioners and analyzed their (collaborative) practices through the lens of trust, drawing on literature in organizational psychology.

Our work reveals complex trust dynamics at the junctions of these chains, involving interactions among diverse technical artifacts, individuals, and organizations. These junctions can become sites of uncalibrated reliance when trustors lack supply-chain knowledge or when power dynamics are at play. Our findings bear implications for AI researchers and policymakers seeking to promote AI governance that fosters calibrated trust.

Bridging the Trust Gap: Investigating the Role of Trust Transfer in the Adoption of AI Instructors for Digital Privacy Education
Description

Recent studies have demonstrated how AI instructors can be used for digital privacy education. However, these studies also highlight the lack of trust that certain individuals, particularly older adults, have in such AI instructors as a major obstacle to their adoption. The current paper introduces "trust transfer" as a means to enhance appropriate trust in AI instructors and improve learning experiences.

A between-subjects experiment (N = 217) was conducted to test the effect of a human introducing an AI instructor on users' trust and learning experiences. Our findings reveal that this trust transfer positively impacts the perceived trustworthiness of the instructor, as well as users' perception of learning and their enjoyment of the educational material, regardless of age.

Based on our findings, we discuss how trust transfer can help calibrate users' trust in AI instructors, thereby fostering AI use in digital privacy education, with potential extensions to other domains.

Trusting Autonomous Teammates in Human-AI Teams - A Literature Review
Description

As autonomous AI agents become increasingly integrated into human teams, the level of trust humans place in these agents, both as pieces of technology and, increasingly, as teammates, significantly impacts the success of human-AI teams (HATs). This work presents a literature review of HAT research investigating humans' trust in their AI teammates. In this review, we first identify the ways in which trust has been conceptualized and operationalized, underscoring the pressing need for clear definitions and consistent measurements. We then categorize and quantify the factors found to influence trust in an AI teammate, highlighting that agent-related factors (such as transparency and reliability) have the strongest impact on trust in HAT research. We also identify under-explored factors related to humans, teams, and environments, and highlight gaps for future HAT research and design.

Robots, Chatbots, Self-Driving Cars: Perceptions of Mind and Morality Across Artificial Intelligences
Description

AI systems have rapidly advanced, diversified, and proliferated, but our knowledge of people’s perceptions of mind and morality in them is limited, despite its importance for outcomes such as whether people trust AIs and how they assign responsibility for AI-caused harms. In a preregistered online study, 975 participants rated 26 AI and non-AI entities. Overall, AIs were perceived to have low-to-moderate agency (e.g., planning, acting), falling between inanimate objects and ants, and low experience (e.g., sensing, feeling). For example, ChatGPT was rated as no more capable of feeling pleasure and pain than a rock. The analogous moral faculties, moral agency (doing right or wrong) and moral patiency (being treated rightly or wrongly), were rated higher and varied more, particularly moral agency: the highest-rated AI, a Tesla Full Self-Driving car, was rated as being as morally responsible for harm as a chimpanzee. We discuss how design choices can help manage perceptions, particularly in high-stakes moral contexts.

"It’s Not the AI’s Fault Because It Relies Purely on Data": How Causal Attributions of AI Decisions Shape Trust in AI Systems
Description

Humans naturally seek to identify causes behind outcomes through causal attribution, yet Human-AI research often overlooks how users perceive causality behind AI decisions. We examine how this perceived locus of causality—internal or external to the AI—influences trust, and how decision stakes and outcome favourability moderate this relationship. Participants (N=192) engaged with AI-based decision-making scenarios operationalising varying loci of causality, stakes, and favourability, evaluating their trust in each AI. We find that internal attributions foster lower trust as participants perceive the AI to have high autonomy and decision-making responsibility. Conversely, external attributions portray the AI as merely "a tool" processing data, reducing its perceived agency and distributing responsibility, thereby boosting trust. Moreover, stakes moderate this relationship—external attributions foster even more trust in lower-risk, low-stakes scenarios. Our findings establish causal attribution as a crucial yet underexplored determinant of trust in AI, highlighting the importance of accounting for it when researching trust dynamics.
