Paper Number
2392
Paper Type
Short
Description
Artificial intelligence, especially artificial intelligence based on machine learning, is rapidly transforming business operations and entire industries. However, because many complex machine learning models are considered black boxes, both adoption of and further reliance on artificial intelligence depend on the ability to understand how these automated models work – a problem known as explainable AI. We propose a novel approach to explainability that leverages conceptual models. Conceptual models are commonly used to capture and integrate domain rules and information requirements for the development of databases and other information technology components. Specifically, we propose a Model Embedding Method (MEM), based on conceptual models, for increasing the explainability of machine learning models, and illustrate it with an application to publicly available mortgage data, in which a machine learning model predicts whether a mortgage is approved. We show how the explainability of machine learning can be improved by embedding machine learning models into the domain knowledge of a conceptual model, which represents a mental model of the real world rather than an algorithm. Our results suggest that such domain knowledge can help address some of the challenges of the explainability problem in AI.
Recommended Citation
Maass, Wolfgang; Castellanos, Arturo; Tremblay, Monica; Lukyanenko, Roman; and Storey, Veda C., "AI Explainability: Embedding Conceptual Models" (2022). ICIS 2022 Proceedings. 12.
https://aisel.aisnet.org/icis2022/data_analytics/data_analytics/12
AI Explainability: Embedding Conceptual Models
Comments
13-DataAnalytics