Paper Number

1002

Paper Type

Complete

Description

AI certification appears essential for managing AI risks. However, it remains unclear how AI certifications should be designed to improve users' perceptions of fairness, trustworthiness, and risk in AI applications. We conducted an experimental study with 203 German participants, comprising two conjoint experiments that covered a high-risk and a low-risk scenario. Users' perceived trustworthiness differed across scenarios depending on the AI certification attributes. Our findings indicate that the design factor contributing most to perceived trustworthiness is the choice of certifying organization. Certification frequency and certification criteria have a moderate impact on perceived trustworthiness, while a code of conduct and self-certification have a negligible impact. For high-risk use cases, users perceive certifications of the actual AI application as more trustworthy than certifications of the AI development process or the overarching AI management system. For low-risk AI applications, certifications of the broader AI management system appear to be the most effective.
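To make the conjoint logic concrete, the sketch below shows how attribute-level effects (part-worths) on perceived trustworthiness could be estimated from rating-based conjoint data with an OLS model. The attribute names, levels, and simulated ratings are illustrative assumptions loosely drawn from the abstract, not the paper's actual design, data, or analysis.

```python
# Illustrative sketch only: estimating part-worth utilities from rating-based
# conjoint data with OLS. Attributes, levels, and ratings are assumptions
# loosely based on the abstract, not the study's actual design or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400  # hypothetical number of rated certification profiles

profiles = pd.DataFrame({
    "certifier": rng.choice(
        ["government_agency", "industry_consortium", "self_certification"], n),
    "frequency": rng.choice(["one_time", "annual", "continuous"], n),
    "cert_object": rng.choice(
        ["ai_application", "development_process", "management_system"], n),
})

# Simulated 7-point trustworthiness ratings so the script runs end to end.
certifier_effect = profiles["certifier"].map(
    {"government_agency": 1.0, "industry_consortium": 0.5, "self_certification": 0.0})
profiles["trust"] = (4 + certifier_effect + rng.normal(0, 1, n)).clip(1, 7)

# Dummy-coded OLS: each coefficient is a part-worth relative to the baseline
# level; the spread of part-worths within an attribute indicates its
# relative importance for perceived trustworthiness.
model = smf.ols("trust ~ C(certifier) + C(frequency) + C(cert_object)",
                data=profiles).fit()
print(model.params.round(2))
```

In such a model, the attribute whose levels show the largest spread in estimated part-worths is commonly read as the most important design factor, which is the sense in which a "most relevant design factor" can be identified from conjoint data.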

Comments

18-Govern

Dec 15th, 12:00 AM

How to Certify AI: Communicating Fairness, Trustworthiness, and Reduced Risk Through AI Certification
