Journal of the Association for Information Systems

Abstract

As artificial intelligence (AI) becomes more pervasive, humans will interact with autonomous agents more frequently and in deeper ways. While there is a significant body of work addressing the interface between a single human and a single AI agent, less is known about how individuals react to AI when they are part of human-agent hybrids, namely multiple humans and potentially multiple AI agents. These hybrid forms are unique in that advice is often given simultaneously, i.e., a human decision maker evaluates advice from other humans and algorithms at the same time. This scenario presents a boundary condition on the extant literature, as it is unclear how a human decision maker will differentially appraise a human advisor compared to an algorithmic advisor when their advice is simultaneous. This study presents the results of three experiments asking individuals to estimate property rental prices with the support of both human and algorithmic advice. We tested whether explicitly labeling an advisor as an algorithm rather than a human impacts how individuals perceive both the algorithm and another human advisor. We also examined the role of conflicting advice during simultaneous evaluation. Based on the results from 904 participants, we found that labeling an advisor as an algorithm resulted in a statistically significant algorithmic appreciation bias, even when an equivalent human advisor was present. Further, we found that uncertainty induced by conflicting information weakened the appreciation effect, while agreement among advisors produced the strongest behavioral responses.

DOI

10.17705/1jais.00896

