Paper Type
ERF (Emergent Research Forum)
Abstract
Effective risk assessment is paramount for responsible generative AI (GenAI) deployment. Traditional governance approaches that rely on manual reviews are inadequate given the scale and velocity of GenAI outputs, so a risk-based approach incorporating real-time monitoring and governance is needed. In this research, we examine how user domain expertise and the level of AI-assessed risk moderate the efficacy of suggestive versus supportive explanations of AI’s risk assessments of GenAI outputs in determining user acceptance. We hypothesize that cognitive involvement increases with the level of AI-assessed risk, with higher risk triggering more critical evaluation. Drawing on the elaboration likelihood model, we hypothesize that supportive explanations have a greater effect on experts whereas suggestive explanations have a greater effect on novices. We also hypothesize that as the AI-assessed risk increases, the reliance of both experts and novices on supportive explanations increases. This research provides insight into the efficacy of explanation styles for AI governance systems.
Paper Number
2185
Recommended Citation
Wu Young, Jiaqi and Nah, Fiona, "AI-Assisted Risk Assessment in Generative AI Governance" (2025). AMCIS 2025 Proceedings. 48.
https://aisel.aisnet.org/amcis2025/intelfuture/intelfuture/48
AI-Assisted Risk Assessment in Generative AI Governance
Track
IntelFuture