Abstract

The opaque nature of algorithms has led to negative consequences such as discriminatory and unfair decisions. Understanding these consequences requires considering algorithmic decision making from different stakeholder perspectives (e.g., organizational vs. customer). We examine how explanations and evaluation metrics influence the consequences of algorithmic decision making by prompting users to adopt different stakeholder perspectives. Specifically, we examine how factual vs. counterfactual explanations and the framing of evaluation metrics impact the decision outcomes of choice, perceived fairness, and confidence. We propose an experiment designed to test our hypotheses about the effects of counterfactual explanations and of frames emphasizing false-negative rates on decision outcomes.
