While regulatory frameworks call for the implementation of AI certifications, empirical knowledge about how such certifications affect human-AI interactions is still scarce. In this work, we examined how AI certifications affect users' trust in and reliance on AI systems. In addition, we examined whether certifications elevate user expectations and whether unmet expectations subsequently reduce trust. In a 2 (certification vs. no certification) × 2 (reliability: high vs. low) between-subjects online study, N = 644 participants identified bacterial infestations in images with the help of an AI. Our results show that, before interacting with the AI, participants trusted the certified system more and showed reduced vigilance. However, these effects disappeared post-interaction, where system reliability, rather than the certification, significantly affected trust and vigilance. Notably, certifications did not raise expectations per se but instead amplified the impact of system reliability on user trust. Additional exploratory results showed that the certification supported appropriate reliance.
ACM CHI Conference on Human Factors in Computing Systems