Abstract

Artificial Intelligence (AI) is a tool that augments various facets of decision making. This disruptive technology helps humans perform better, faster, and more accurately (Grigsby 2018). There are tasks where AI decides in real time without human intervention; for example, AI can approve or decline a credit card application without any human involvement. On the other hand, there are tasks where both AI and human reasoning are required to make the decision. For instance, automated employee selection decisions require a higher level of human involvement, and interaction between humans and machines is needed in such decisions. Grigsby (2018) posits that this interaction becomes effective when the machine understands the human and the human understands the machine. This interplay is called human-machine symbiosis, which merges the best of the human with the best of the machine. Human decision-makers need to understand how the machine reaches a specific prediction. One tool that facilitates this understanding by increasing the interpretability of the algorithm is Explainable AI (XAI). XAI explains the results to the decision-maker in a human-understandable manner (Rai 2020). As a result, the decision is more transparent and fairer. Beyond the benefits of transparency and fairness, there is an emerging regulatory requirement for explaining machine-driven decisions: the General Data Protection Regulation addresses the right to explanation by enabling individuals to ask for an explanation of an algorithm’s output (Selbst and Powles 2017). That is why decision-makers need to convert their decision-making tool from a black box to a glass box. To enhance explainability and interpretability, two broad categories of XAI techniques are model-specific XAI and model-agnostic XAI (Rai 2020). Model-specific techniques incorporate interpretability into the inherent structure of the learning model, whereas model-agnostic techniques use the learning model as an input to generate explanations. These techniques help ensure transparency and fairness in human-machine decision making. Another important factor for effective human-machine symbiosis is decision task complexity (Grigsby 2018). Task complexity in decision making can be characterized by the number of desired outcomes, conflicting interdependencies among outcomes, path multiplicity, and uncertainty (Campbell 1988). When the decision-making task is unstructured and complicated, the decision-maker’s need to understand the algorithmic process increases. Moreover, decision task complexity influences trust in the autonomous system, and trust in turn shapes human-machine symbiosis (Grigsby 2018). Furthermore, decision task complexity is related to the mental workload and cognitive ability of decision-makers (Grigsby 2018; Speier and Morris 2003). In the extant literature, there is a gap in explaining how the interplay between XAI techniques and decision task complexity affects decision-makers’ perceptions of human-machine symbiosis. Therefore, the objective of this research is to investigate the effect of XAI and decision task complexity on perceived human-machine symbiosis. Using the theories of information overload and algorithmic transparency, we develop a causal model to explain the relationship. We will run a randomized 2×2 factorial experiment to test the model. The paper will have theoretical and practical implications.
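To make the model-agnostic idea concrete, the following is a minimal, illustrative sketch (not drawn from the paper) of a post-hoc explanation for a hypothetical credit-approval classifier. It assumes a scikit-learn workflow and uses permutation feature importance as one example of a model-agnostic technique; the feature names, synthetic data, and model choice are assumptions made for demonstration only.

```python
# Illustrative sketch only: model-agnostic explanation of a hypothetical
# credit-approval classifier via permutation feature importance.
# Feature names and data are invented for demonstration purposes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-application dataset
feature_names = ["income", "debt_ratio", "credit_history_len", "num_late_payments"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" decision model: approve (1) or decline (0) an application
model = LogisticRegression().fit(X_train, y_train)

# Model-agnostic explanation: the trained model is queried only through its
# predictions, so the same procedure works for any underlying classifier.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=30, random_state=0)

# Rank features by how much shuffling each one degrades predictive accuracy
for name, mean_imp in sorted(zip(feature_names, result.importances_mean),
                             key=lambda pair: -pair[1]):
    print(f"{name}: {mean_imp:.3f}")
```

Because the explanation treats the model as an input and relies only on its predictions, it could be presented to a human decision-maker regardless of which learning algorithm produced the decision, which is the property that distinguishes model-agnostic from model-specific XAI.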

Abstract Only
