Algorithmic Trust and Censorship

Conference Name
CHI 2024
Dealing with Uncertainty: Understanding the Impact of Prognostic Versus Diagnostic Tasks on Trust and Reliance in Human-AI Decision Making
Abstract

While existing literature has explored and revealed several insights pertaining to the role of human factors (e.g., prior experience, domain knowledge) and attributes of AI systems (e.g., accuracy, trustworthiness), there is limited understanding of how the important task characteristics of complexity and uncertainty shape human decision-making and human-AI team performance. In this work, we aim to address this empirical gap by systematically exploring how task complexity and uncertainty influence human-AI decision-making. Task complexity refers to the load of information associated with a task, while task uncertainty refers to the level of unpredictability associated with the outcome of a task. We conducted a between-subjects user study (N = 258) in the context of a trip-planning task to investigate the impact of task complexity and uncertainty on human trust and reliance on AI systems. Our results revealed that task complexity and uncertainty have a significant impact on user reliance on AI systems. When presented with complex and uncertain tasks, users tended to rely more on AI systems while demonstrating lower levels of appropriate reliance than on tasks that were less complex and uncertain. In contrast, we found that user trust in the AI systems was not influenced by task complexity and uncertainty. Our findings can help inform the future design of empirical studies exploring human-AI decision-making. Insights from this work can inform the design of AI systems and interventions that are better aligned with the challenges posed by complex and uncertain tasks. Finally, the lens of diagnostic versus prognostic tasks can inspire the operationalization of uncertainty in human-AI decision-making studies.
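The distinction between reliance and appropriate reliance is central to this abstract. Below is a minimal Python sketch of how these behavioral measures are commonly operationalized in human-AI decision-making studies; the trial fields and both formulas are illustrative conventions from this literature, not the measures actually reported in the paper.

```python
# Common behavioral reliance measures in human-AI decision-making studies.
# The Trial fields and both formulas are illustrative assumptions, not the
# paper's actual operationalization.
from dataclasses import dataclass

@dataclass
class Trial:
    ai_advice: str       # the AI system's recommendation
    final_answer: str    # the participant's decision after seeing the advice
    correct_answer: str  # ground truth for the task

def reliance(trials: list[Trial]) -> float:
    """Fraction of trials in which the participant followed the AI's advice."""
    return sum(t.final_answer == t.ai_advice for t in trials) / len(trials)

def appropriate_reliance(trials: list[Trial]) -> float:
    """Fraction of trials in which following or overriding the AI was the
    right move: follow when the AI is correct, override when it is not."""
    return sum(
        (t.final_answer == t.ai_advice) == (t.ai_advice == t.correct_answer)
        for t in trials
    ) / len(trials)

trials = [
    Trial("hotel_a", "hotel_a", "hotel_a"),  # followed a correct AI: appropriate
    Trial("hotel_b", "hotel_b", "hotel_c"),  # followed a wrong AI: over-reliance
    Trial("hotel_c", "hotel_a", "hotel_a"),  # overrode a wrong AI: appropriate
]
print(reliance(trials), appropriate_reliance(trials))  # 0.667 0.667
```

Under this kind of scheme, high reliance with low appropriate reliance (the paper's finding for complex, uncertain tasks) indicates over-reliance: users follow the AI even when it is wrong.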

Authors
Sara Salimzadeh
Delft University of Technology, Delft, Netherlands
Gaole He
Delft University of Technology, Delft, Netherlands
Ujwal Gadiraju
Delft University of Technology, Delft, Netherlands
Paper URL

https://doi.org/10.1145/3613904.3641905

Impact of Model Interpretability and Outcome Feedback on Trust in AI
Abstract

This paper bridges a gap in Human-Computer Interaction (HCI) research by comparatively assessing the effects of interpretability and outcome feedback on user trust and collaborative performance with AI. Through novel pre-registered experiments (N=1,511 total participants) using an interactive prediction task, we analyzed how interpretability and outcome feedback influence users' task performance and trust in AI. The results counter the widespread belief that interpretability drives trust, showing that interpretability led to no robust improvements in trust and that outcome feedback had a significantly greater and more reliable effect. However, both factors had only modest effects on participants' task performance. These findings suggest that (1) interpretability may be less effective at increasing trust than factors like outcome feedback, and (2) augmenting human performance via AI systems may not be a simple matter of increasing trust in AI, as increased trust is not always accompanied by equally sizable performance improvements. Our exploratory analyses further delve into the mechanisms underlying this trust-performance paradox. These findings present an opportunity for research to focus not only on methods for generating interpretations but also on techniques that ensure interpretations impact trust and performance in practice.
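For context, here is a toy sketch of the 2x2 factorial analysis such an experiment implies, regressing trust on the two manipulated factors. The data are simulated (with an invented feedback effect and no interpretability effect, mirroring only the direction of the reported result); nothing below reproduces the paper's actual analysis.

```python
# Toy 2x2 factorial analysis: trust ~ interpretability * outcome feedback.
# All data are simulated; effect sizes are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 100  # simulated participants per condition

frames = []
for interp in (0, 1):
    for feedback in (0, 1):
        # Simulated trust score: feedback shifts it, interpretability does not.
        trust = 4.0 + 0.5 * feedback + rng.normal(0.0, 1.0, n)
        frames.append(pd.DataFrame({
            "interpretability": interp,
            "outcome_feedback": feedback,
            "trust": trust,
        }))
df = pd.concat(frames, ignore_index=True)

# OLS with main effects and the interaction of the two manipulated factors.
model = smf.ols("trust ~ interpretability * outcome_feedback", data=df).fit()
print(model.summary())
```

In a setup like this, a significant coefficient on outcome_feedback alongside a null coefficient on interpretability would correspond to the pattern the abstract describes.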

Authors
Daehwan Ahn
University of Georgia, Athens, Georgia, United States
Abdullah Almaatouq
MIT, Cambridge, Massachusetts, United States
Monisha Gulabani
Amazon, Seattle, Washington, United States
Kartik Hosanagar
University of Pennsylvania, Philadelphia, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3613904.3642780

Exposed or Erased: Algorithmic Censorship of Nudity in Art
Abstract

The intersection between art and technology poses new challenges for creative expression in the digital space. This paper investigates the algorithmic censorship of artistic nudity on social media platforms by means of a qualitative study based on semi-structured interviews with 14 visual artists who have experienced censorship online. We explore the professional, emotional, financial, and artistic consequences of content removal or shadow-banning. Focusing on the concept of artistic nudity, our findings emphasize the significant impact that the algorithmic censorship of art has on artists, the need to consider art as a special case to safeguard freedom of expression, the importance of education, the limitations of today's content moderation algorithms, and the pressing need for transparency and recourse mechanisms. We advocate for a multi-stakeholder governance model conducive to a more supportive, safer, and more inclusive online environment that respects and nurtures human creativity.

Authors
Piera Riccio
ELLIS Alicante, Alicante, Spain
Thomas Hofmann
ETH Zurich, Zurich, Switzerland
Nuria Oliver
ELLIS Alicante, Alicante, Spain
Paper URL

https://doi.org/10.1145/3613904.3642586

Trust in AI-assisted Decision Making: Perspectives from Those Behind the System and Those for Whom the Decision is Made
Abstract

Trust between humans and AI in the context of decision-making has acquired an important role in public policy, research, and industry. In this context, Human-AI Trust has often been tackled through the lens of cognitive science and psychology, but insights from the stakeholders involved are lacking. In this paper, we report on semi-structured interviews with 7 AI practitioners and 7 decision subjects from various decision domains. We found that 1) interviewees identified the prerequisites for the existence of trust and distinguished trust from trustworthiness, reliance, and compliance; 2) trust in AI-integrated systems is strongly influenced by other human actors, more than by the system's features; 3) the role of Human-AI trust factors is stakeholder-dependent. These results provide clues for the design of Human-AI interactions in which trust plays a major role, and they outline new research directions in Human-AI Trust.

Authors
Oleksandra Vereschak
Sorbonne Université, CNRS, ISIR, Paris, France
Fatemeh Alizadeh
University of Siegen, Siegen, Germany
Gilles Bailly
Sorbonne Université, CNRS, ISIR, Paris, France
Baptiste Caramiaux
Sorbonne Université, CNRS, ISIR, Paris, France
Paper URL

https://doi.org/10.1145/3613904.3642018

Understanding Public Perceptions of AI Conversational Agents: A Cross-Cultural Analysis
Abstract

Conversational Agents (CAs) have increasingly been integrated into everyday life, sparking significant discussions on social media. While previous research has examined public perceptions of AI in general, there is a notable lack of research focused on CAs, with even fewer investigations into cultural variations in CA perceptions. To address this gap, this study used computational methods to analyze about one million social media discussions surrounding CAs and compared people's discourses and perceptions of CAs in the US and China. We found that Chinese participants tended to view CAs hedonically, perceived voice-based and physically embodied CAs as warmer and more competent, and generally expressed positive emotions toward them. In contrast, US participants saw CAs more functionally and held an ambivalent attitude. Perceived warmth was a key driver of positive emotions toward CAs in both countries. We discuss practical implications for designing contextually sensitive, user-centric CAs that resonate with various users' preferences and needs.
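As a rough illustration of one computational approach to such cross-cultural comparisons, here is a hedged sketch of dictionary-based warmth and competence scoring over posts. The term lists, example posts, and scoring scheme are all invented for demonstration; the study's actual pipeline is not described in the abstract and is not reproduced here.

```python
# Naive dictionary-based scoring of warmth/competence mentions in posts.
# Term lists and example posts are invented; real analyses of this kind
# typically use validated lexicons or trained classifiers instead.
from collections import Counter

WARMTH_TERMS = {"friendly", "caring", "cute", "kind", "helpful"}
COMPETENCE_TERMS = {"smart", "accurate", "capable", "efficient", "reliable"}

def perception_scores(posts: list[str]) -> dict[str, float]:
    """Mean per-post counts of warmth and competence terms.
    Token matching is naive: it ignores negation ("not friendly")."""
    counts = Counter()
    for post in posts:
        tokens = set(post.lower().split())
        counts["warmth"] += len(tokens & WARMTH_TERMS)
        counts["competence"] += len(tokens & COMPETENCE_TERMS)
    n = max(len(posts), 1)
    return {dim: counts[dim] / n for dim in ("warmth", "competence")}

us_posts = ["Alexa is efficient but not very friendly", "Siri is accurate"]
cn_posts = ["Xiaoice is so cute and caring", "such a kind helpful robot"]
print("US:", perception_scores(us_posts))
print("CN:", perception_scores(cn_posts))
```

Comparing the resulting warmth and competence means across the two corpora is the kind of contrast the abstract's US-versus-China findings imply, though at a far larger scale and with more robust methods.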

Authors
Zihan Liu
National University of Singapore, Singapore, Singapore
Han Li
National University of Singapore, Singapore, Singapore
Anfan Chen
Hong Kong Baptist University, Hong Kong, Hong Kong
Renwen Zhang
National University of Singapore, Singapore, Singapore
Yi-Chieh Lee
National University of Singapore, Singapore, Singapore
Paper URL

https://doi.org/10.1145/3613904.3642840
