Abstract

Effective collaboration between humans and artificial intelligence (AI) results in superior decision-making outcomes when human reliance on AI is appropriately calibrated. The emerging research area of human-AI decision-making focuses on empirical methods to explore how humans perceive and act in collaborative environments. While previous studies provide promising insights into reliance on AI systems, the multitude of studies has made it challenging to compare and generalize their outcomes. To address this complexity, we use the theoretical lens of task-technology fit theory and synthesize study design choices into four meta-characteristics: collaboration, agent, task, and precondition. Our goal is to develop a taxonomy of AI reliance experiment design choices that helps structure research efforts and supports the production of generalizable scientific knowledge. Thus, our research makes notable contributions to empirical science in information systems and has practical implications for the design of AI systems.