Artificial intelligence (AI) is becoming increasingly popular, and intelligent systems are being deployed for a wide range of use cases. However, because these systems typically rely on complex machine learning methods, they effectively act as black boxes. Consumers are therefore usually not informed about the inner workings of these systems, e.g., their data sources or feature importance. Public and private institutions have already called for fairness and transparency standards for intelligent systems. Although researchers are developing mechanisms to ensure the transparency of intelligent systems, it remains an open question how consumers perceive such transparency features. Consequently, our study examines to what extent consumers are willing to pay for these features and what the underlying mechanisms of the purchase decision are. To answer these questions, we conduct an experiment and a subsequent survey in the context of credit scoring. We show that consumers exhibit a significant willingness to pay for transparency. Furthermore, we find that increased trust in the intelligent system, driven by enhanced perceived transparency, is the main mechanism behind the positive evaluation of transparency features. Our findings inform practitioners about the relevance of “fair AI” and underscore the importance of transparency research on intelligent systems in the social sciences.
Peters, Felix; Pumplun, Luisa; and Buxmann, Peter, "Opening the Black Box: Consumer's Willingness to Pay for Transparency of Intelligent Systems" (2020). ECIS 2020 Research Papers. 90.