Certified AI System = Trustworthy? Exploring Expert and Lay User Perceptions and Needs Regarding AI Certification

Abstract

AI certification has emerged as a promising mechanism to enhance transparency, accountability, and public trust. However, end-user perspectives on it remain largely unexplored. This study investigates perceptions of AI certification among two groups with differing levels of AI expertise. Through qualitative interviews with 30 participants (15 experts, 15 lay users), we examined how AI certification influences trust, who should conduct it, transparency needs, post-certification monitoring, and certification fraud. Results reveal key differences between the two groups. Lay users perceive AI certification more positively than experts. Both groups prefer independent certifiers, with experts being more open to certification by private companies. Experts favor post-certification monitoring tied to system updates, whereas lay users prefer annual checks. Both groups value transparency, but the specific details they require differ. Regarding fraudulent AI certification, experts emphasize technical safeguards, while lay users focus on legal enforcement. The study discusses the implications of these findings and offers several recommendations for improving AI certification schemes.

Authors
Sarah Abdelwahab Gaballah
Ruhr University Bochum, Bochum, Germany
Nur Efsan Cetinkaya
University of Duisburg-Essen, Essen, Germany
Magdalena Wischnewski
University of Duisburg-Essen, Essen, Germany
Martina Sasse
Ruhr University Bochum, Bochum, Germany

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Critical Reflections on AI

P1 - Room 121
7 presentations
2026-04-14, 20:15–21:45