Paper Type

ERF

Abstract

Effective risk assessment is paramount for responsible generative AI (GenAI) deployment. Traditional governance approaches that rely on manual review are inadequate given the scale and velocity of GenAI outputs; a risk-based approach incorporating real-time monitoring and governance is essential. In this research, we examine how user domain expertise and the AI’s assessed risk level moderate the efficacy of suggestive versus supportive explanations in shaping user acceptance of the AI’s risk assessments of GenAI outputs. We hypothesize that cognitive involvement increases with the AI’s assessed risk, with higher risk triggering more critical evaluation. Drawing on the elaboration likelihood model, we hypothesize that supportive explanations have a greater effect on experts, while suggestive explanations have a greater effect on novices. We further hypothesize that as the AI’s assessed risk increases, both experts’ and novices’ reliance on supportive explanations increases. This research provides insight into the efficacy of explanation styles for AI governance systems.

Paper Number

2185

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/2185

Comments

IntelFuture

Aug 15th, 12:00 AM

AI-Assisted Risk Assessment in Generative AI Governance
