Paper Number: 1920
Paper Type: Completed
Description
We theorize why some artificial intelligence (AI) algorithms unexpectedly treat protected classes unfairly. We hypothesize that mechanisms by which AI assumes agencies, rights, and responsibilities of its stakeholders can affect AI bias by increasing complexity and irreducible uncertainties: e.g., AI’s learning method, anthropomorphism level, stakeholder utility optimization approach, and acquisition mode (make, buy, collaborate). In a sample of 726 agentic AI, we find that unsupervised and hybrid learning methods increase the likelihood of AI bias, whereas “strict” supervised learning reduces it. Highly anthropomorphic AI increases the likelihood of AI bias. Using AI to optimize one stakeholder’s utility increases AI bias risk, whereas jointly optimizing the utilities of multiple stakeholders reduces it. User organizations that co-create AI with developer organizations instead of developing it in-house or acquiring it off-the-shelf reduce AI bias risk. The proposed theory and the findings advance our understanding of responsible development and use of agentic AI.
Recommended Citation
Tanriverdi, Hüseyin; Akinyemi, John-Patrick; and Neumann, Terrence, "Mitigating Bias in Organizational Development and Use of Artificial Intelligence" (2023). ICIS 2023 Proceedings. 19.
https://aisel.aisnet.org/icis2023/hti/hti/19
Comments: 09-HCI