PACIS 2022 Proceedings





Artificial intelligence (AI) applications in health care, education, finance, mining, communications, and the arts have brought rapid and dramatic advances to these fields (Hyder et al., 2019). The rapidly expanding role of AI in the economy and society has raised a set of legal and ethical issues (Wang and Siau, 2019; Siau and Wang, 2020). In-depth research into the ethical and legal aspects of AI is needed to enable policymakers to introduce effective legislation and to regulate AI development and applications. This research uses a systematic qualitative research methodology, Value-Focused Thinking (VFT) (Keeney, 1996; Sheng et al., 2007), to identify fundamental and means objectives as well as the relationships between them. Means objectives are distinguished from fundamental objectives by the "Why is it important?" test: if an objective is important because it helps to achieve another objective, it is a means objective; otherwise, it is a fundamental objective. The means-ends objective network derived from this research can provide meaningful guidance for researchers and practitioners in understanding the legal and ethical values of AI, and can inform policymaking by legal professionals. In VFT, values are what we care about and serve as the principles used for evaluation (Keeney, 1996). Values are used to evaluate the actual or potential consequences of action and inaction on proposed AI alternatives and decisions, and they are represented as objectives in the means-ends objective network. The interviewees (i.e., subjects) for this research will be legal professionals (e.g., lawyers and judges) and policymakers. We will interview each of them individually, asking questions to solicit the values that he or she believes are important in the legal and ethical aspects of AI.
When the interviewees no longer generate new concepts (i.e., the point of saturation is reached), we consolidate the list of raw concepts at a more abstract level to derive objectives in the form of theoretical constructs. The consolidated list of objectives and their relationships describes the legal and ethical values of AI. The significance of this stream of research is clear. Elon Musk warned that "The danger of AI is much greater than the danger of nuclear warheads by a lot..." Stephen Hawking said, "We are at the most dangerous moment in the development of humanity... the rise of artificial intelligence is likely to extend job destruction deep into the middle classes, with only the most caring, creative, or supervisory roles remaining." Research into the legal and ethical aspects of AI is urgently needed to develop legal regulations that guide AI development and restrict unethical and dangerous AI operations and applications. In addition to these practical contributions, this research produces a means-ends objective network that provides a conceptual framework to guide future research in this domain.
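The means-ends objective network described above can be viewed as a directed graph of objectives, where the "Why is it important?" test induces the edges. The following minimal sketch illustrates that idea; the objective names are purely hypothetical examples, not findings of this study. An edge from A to B records that A is important because it helps achieve B, so any objective with an outgoing edge is a means objective, and an objective with none is fundamental.

```python
# Hypothetical means-ends objective network (illustrative only; these
# objectives are NOT taken from the study's interview data).
# network[obj] lists the objectives that obj helps to achieve -- i.e.,
# the answers to the "Why is it important?" test for obj.
network = {
    "Ensure algorithmic transparency": ["Protect individual rights"],
    "Define liability for AI decisions": ["Protect individual rights"],
    "Protect individual rights": [],  # no further "why" -> fundamental
}

def classify(network):
    """Split objectives into means and fundamental objectives.

    An objective that helps achieve another (has outgoing edges) is a
    means objective; one that is important in its own right (no
    outgoing edges) is a fundamental objective.
    """
    means = {obj for obj, ends in network.items() if ends}
    fundamental = set(network) - means
    return means, fundamental

means, fundamental = classify(network)
print("Means objectives:", sorted(means))
print("Fundamental objectives:", sorted(fundamental))
```

In this toy network, the two transparency and liability objectives are means to protecting individual rights, which is the sole fundamental objective; the actual network produced by the VFT interviews would of course contain many more objectives and links.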


