Paper Number
1002
Paper Type
Complete
Description
AI certification appears essential for managing AI risks. However, it remains unclear how AI certifications need to be designed to improve users' perceptions of fairness, trustworthiness, and risk associated with AI applications. We conducted two conjoint experiments with 203 German participants, covering high- and low-risk scenarios. We observed differences in users' perceived trustworthiness across scenarios with respect to AI certification attributes. Our findings indicate that the design factor contributing most to perceived trustworthiness is the choice of certifying organization. Certification frequency and certification criteria have a moderate impact on perceived trustworthiness, whereas a code of conduct and self-certification have a negligible impact. For high-risk use cases, users perceive certifications of the actual AI application as more trustworthy than certifications of the AI development process or the overarching AI management system. For low-risk AI applications, certifications of the broader AI management system appear to be the most effective.
Recommended Citation
Kahdan, Manoj; Stead, Susan; and Salge, Oliver, "How to Certify AI: Communicating Fairness, Trustworthiness, and Reduced Risk Through AI Certification" (2024). ICIS 2024 Proceedings. 10.
https://aisel.aisnet.org/icis2024/gov_strategy/gov_strategy/10
How to Certify AI: Communicating Fairness, Trustworthiness, and Reduced Risk Through AI Certification
Comments
18-Govern