Paper Type
ERF
Paper Number
1251
Description
As AI becomes embedded in everyday professional and personal tasks, the need for accessible and understandable explanations of AI decisions is compelling. Accordingly, firms are developing solutions (either voluntarily or through government mandate) to provide explanations of AI decision-making to those who are subject to the consequences of the decisions. Given the recency of this progress, limited research has been conducted on how the introduction of these explanations may alter the relationship between humans, agents, model findings, and outcomes. First, humans have a long history of “outsmarting” systems that affect their welfare. Second, humans are not passive recipients of explanations of model decision-making but will incorporate new information into future actions. Our research interest is in examining how humans come to evaluate AI decision-making without explicit explanations, and how contingencies in explanations alter the human’s behavior and model operations.
Recommended Citation
Slaughter, Kelly and Preston, David, "Interpretable Models & Metabehaviors: A Proposed Study of Microlending" (2021). AMCIS 2021 Proceedings. 6.
https://aisel.aisnet.org/amcis2021/art_intel_sem_tech_intelligent_systems/art_intel_sem_tech_intelligent_systems/6
Interpretable Models & Metabehaviors: A Proposed Study of Microlending