AIS Transactions on Human-Computer Interaction
Abstract
The technologies that we have come to know as artificial intelligence (AI), such as machine learning, deep learning, computer vision, and natural language processing, are becoming general-purpose tools that significantly affect the economic and social structures of organizations and society. However, that impact has not been entirely positive. Many projects have already shown how undesirable or negative consequences of AI systems can harm the organizations that deploy them socially, financially, and legally. In this study, we examine the common intended objectives and risk factors that lead to negative consequences in AI projects. Using a qualitative approach, we propose a unifying theoretical framework for negative consequences in AI projects. We analyzed 840 quotes from key informants about 30 unique AI projects, drawing on multiple news articles for each project, and identified the intended objectives for implementing AI systems that lead to negative consequences through various linking risk factors.
DOI
10.17705/1thci.00203
Recommended Citation
Sharma, M., Biros, D., Baham, C., & Biros, J. (2024). What went wrong? Identifying risk factors for popular negative consequences in AI. AIS Transactions on Human-Computer Interaction, 16(2), 139-176. https://doi.org/10.17705/1thci.00203