Certified But Imperfect: Investigating The Role of AI Certifications And System Performance on Trust in And Reliance on AI Systems

Abstract

While regulatory frameworks call for the implementation of AI certifications, empirical knowledge about how such certifications affect human-AI interactions is still scarce. In this work, we examined how AI certifications affect users' trust and reliance. In addition, we examined whether certifications elevate user expectations and whether unmet expectations subsequently reduce trust. In a 2 (certification: present vs. absent) × 2 (reliability: high vs. low) between-subjects online study, N = 644 participants had to identify bacterial infestation in pictures with the help of an AI. Our results show that, before interacting with the AI, participants trusted the certified system more and showed reduced vigilance. However, these effects disappeared post-interaction, where system reliability, rather than the certification, significantly affected trust and vigilance. Notably, certifications did not raise expectations per se but instead amplified the impact of system reliability on user trust. Additional exploratory results showed that the certification supported appropriate reliance.

Authors
Magdalena Wischnewski
Research Center for Trustworthy Data Science and Security, Dortmund, Germany
Alisa Scharmann
University of Duisburg-Essen, Duisburg, Germany
Annika Ridder
University of Duisburg-Essen, Duisburg, Germany
Nicole Krämer
Social Psychology - Media and Communication, Universität Duisburg-Essen, Duisburg, Germany

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Human Factors in Privacy, Security, and Trust

P1 - Room 117
7 presentations
2026-04-14, 18:00–19:30