Paper Type

Complete

Abstract

Algorithmic Accountability of Low-Code/No-Code Artificial Intelligence (LCNC AI) presents a significant challenge, as these platforms democratize AI development while diminishing direct oversight. Unlike traditional AI systems, applications built with LCNC AI tools often lack governance structures, increasing risks of bias, opacity, and regulatory non-compliance. Organizations struggle to implement accountability mechanisms as non-technical users deploy AI without comprehensive validation frameworks. This study conducts a Structured Literature Review (SLR) to analyze existing research on algorithmic accountability in LCNC AI. The findings highlight critical risks, governance approaches, and accountability frameworks essential for mitigating ethical and compliance concerns. The study emphasizes the necessity of hybrid governance approaches, integrating organizational oversight with user-driven compliance measures. To bridge research gaps, this study proposes a research agenda aimed at refining ethical and regulatory frameworks for LCNC AI. By providing concrete governance strategies, this study offers practical recommendations for organizations to ensure accountable and responsible LCNC AI deployment.

Paper Number

1922

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1922

Comments

SIGODIS

Aug 15th, 12:00 AM

Algorithmic Accountability of Low-Code/No-Code Artificial Intelligence: A Literature Review

