Abstract

Artificial intelligence (AI) applications have significant potential to stimulate economic growth and improve productivity. At the same time, there is a growing need to verify and regulate the ability of AI systems to support and make decisions responsibly (Inel et al., 2023; Mäntymäki et al., 2022). This includes defining conceptual frameworks and guidelines to mitigate the possible adverse effects of AI technologies. In October 2023, President Biden issued an executive order on safe, secure, and trustworthy AI (White House, 2023; NIST, 2024). Responsible AI is variously characterized as ethical, trustworthy, or sustainable AI. Studies on this topic focus mainly on high-level principles such as transparency, justice, fairness, and nonmaleficence (Fjeld et al., 2020). However, there is little research on how these principles affect the actual design and operation of AI applications (Shaw and Zhu, 2022; Inel et al., 2023), leaving much room for interpretation in how AI applications are implemented and used in specific contexts and domains. Organizations need concrete guidance and implementation practices to bridge the gap between principle and design, thereby managing the end-to-end AI system development life cycle effectively rather than relying on ad-hoc algorithms. Moreover, digital transformation is increasing digital autonomy while reducing human involvement. Methods are needed that enable software engineers to work with diverse users and communities to select the right problems, define outcomes, and establish inclusive goals and boundaries for AI; in addition, AI-generated solutions should inspire confidence in users. The study proposal is motivated by research questions guiding the design and implementation of a responsible AI application; accordingly, it examines the early phases of system development rather than focusing on outcomes.
System-level and cross-cutting system components are analyzed using software engineering and systems integration methods to enhance responsible AI features in selected domains. The primary research questions are: What design patterns support the development of operationally responsible AI applications? And how can users verify the operation of such applications? The research first identifies the design patterns that developers use during the requirements, analysis, and design phases, based on software engineering methods. We then review how users can examine a system's ethical credentials to check compliance. The result is a set of design patterns embedded as product features within a system to create a responsible AI application. In addition, the research will integrate the technical and non-technical measures collected in (Chung, 2024) into operational AI application development, taking the interaction between the various factors into account. Finally, the research will build a prototype based on a unified framework focused on developing verifiable and responsible AI systems.
