
Paper Number

2160

Paper Type

Short

Abstract

Algorithmic decision-making is increasingly being used by companies to reduce costs and enable their employees to make better-informed decisions. However, algorithmic systems can be biased, resulting in decisions that systematically disadvantage certain groups. This research aims to examine how users of algorithmic systems respond to algorithmic bias. Furthermore, we aim to analyze how users of algorithmic systems morally reason about algorithmic bias. In this short paper, we propose a theoretical model of two moral reasoning processes – moral decoupling and moral disengagement – and outline a research design to study how these reasoning processes impact users’ continued use of algorithmic systems. Findings from our pilot study suggest that users follow diverse usage patterns when faced with algorithmic bias, with most participants discontinuing their use of the algorithmic system. Our research contributes to the literature on algorithmic bias by providing insights into how users of algorithmic systems respond to and morally reason about algorithmic bias.

Comments

12-ImplAndAdopt

Dec 15th, 12:00 AM

Exploring Users’ Moral Reasoning Processes and their Impact on the Continued Use of Algorithmic Systems

