AI certification has emerged as a promising mechanism for enhancing transparency, accountability, and public trust. However, end-user perspectives on it remain largely unexplored. This study investigates those perspectives across two groups with differing AI expertise. Through qualitative interviews with 30 participants (15 experts, 15 lay users), we examined how AI certification influences trust, who should conduct it, what transparency users need, how certified systems should be monitored afterward, and how certification fraud should be addressed. The results reveal key differences between the two groups. Lay users perceive AI certification more positively than experts do. Both groups prefer independent certifiers, though experts are more open to certification by private companies. Experts favor post-certification monitoring tied to system updates, whereas lay users prefer annual checks. Both groups value transparency, but the specific details they require differ. Regarding certification fraud, experts emphasize technical safeguards, while lay users focus on legal enforcement. We discuss the implications of these findings and offer several recommendations for improving AI certification schemes.
ACM CHI Conference on Human Factors in Computing Systems