Paper ID

3156

Description

Researchers increasingly acknowledge that algorithms can exhibit bias, yet artificial intelligence (AI) continues to be integrated into organizational decision-making. How does biased AI shape human choices? We consider a sequential AI-human decision that mirrors organizational decisions: an automated system provides a score, and then a human assigns a score using their discretion. We conduct an Amazon Mechanical Turk (AMT) survey and ask participants to assign one of two types of scores: a subjective, context-dependent measure (Beauty) and an objective, observer-independent measure (Age). Participants are either shown the AI score, shown the AI score and its error, or not shown the AI score. We find that participants without knowledge of the AI score do not exhibit bias; however, knowing the AI score for the subjective measure induces bias in participants’ scores through an anchoring effect. For the objective measure, participants’ scores do not display bias, and participants who receive information about the AI error rate discount the AI score and reduce their own error. This study makes several contributions to the information systems literature. First, it provides a novel way to discuss AI bias by distinguishing between subjective and objective measures. Second, it highlights potential spillover effects from algorithmic bias into human decisions: if biased AI anchors human decisions, it can induce bias into previously unbiased scores. Third, we examine a method for reducing participants’ reliance on the AI, reporting its error rate, and find evidence that it is effective for the objective measure.


Beauty’s in the AI of the Beholder: How AI Anchors Subjective and Objective Predictions
