ACIS 2024 Proceedings

Abstract

The rapid growth of AI technologies across industries offers significant productivity gains but also introduces notable risks and ethical challenges. Because AI systems are usually opaque and difficult for users to understand, they can erode users' trust and make users hesitant to engage. Recent studies have highlighted AI transparency as a core principle of responsible AI, particularly for building trust and ensuring accountability. While AI transparency has so far been largely voluntary, more governments are considering mandates. This research analyzes organizational perspectives on mandating AI transparency by examining public submissions to the Australian government's "Safe and responsible AI" discussion paper, employing both manual and automated thematic analysis with large language models (LLMs). The study aims to advance understanding of AI transparency as a socio-technical phenomenon and to provide policymakers and industry leaders with practical insights.
