Demystifying the Black Box: A Classification Scheme for Interpretation and Visualization of Deep Intelligent Systems

Abstract

The evolution of intelligent systems development, especially advances in the field of deep learning networks, facilitates the design of complex prediction systems in fields such as image recognition and time series prediction, spanning sectors from manufacturing to the service industry. The accuracy achieved by these complex systems comes at the price of complete non-transparency of model results. To address this research gap, we propose a systematic classification scheme that organizes methods for interpreting and visualizing deep neural network results. We characterize these methods, give a comprehensive overview of the current state of the art, and discuss limitations and directions for further research.
