Abstract

Machine Learning (ML) models have become ubiquitous across research and decision-making. Understanding both the ML model and the data-generating process (DGP) behind the dataset under examination is important, yet most highly accurate ML models are black boxes that are not interpretable. In this work, we propose a methodology that can help elicit important information from any ML model. Our methodology allows any highly accurate ML model to be used to find interactions between variables in the dataset, enabling a better understanding of the underlying DGP through a data- and model-agnostic process that synthesizes new knowledge about the underlying phenomenon.
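The abstract does not spell out the detection procedure, so the sketch below is only one plausible illustration of model-agnostic interaction detection: it fits a black-box model to synthetic data and compares joint versus additive partial dependence surfaces, in the spirit of Friedman's H-statistic. All data, feature indices, and helper functions here are assumptions for illustration, not the paper's actual method.

```python
# Illustrative sketch (assumed procedure, not the paper's own): probe a
# black-box model for pairwise interactions by comparing the joint partial
# dependence of two features against the sum of their marginal effects.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic DGP with a genuine x0*x1 interaction and an additive x2 term.
X = rng.uniform(-1, 1, size=(2000, 3))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, size=2000)

model = GradientBoostingRegressor().fit(X, y)


def partial_dependence(model, X, feature_idx, grid):
    """Average model prediction with the given features fixed at grid values."""
    pd_vals = []
    for values in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = values
        pd_vals.append(model.predict(X_mod).mean())
    return np.array(pd_vals)


def interaction_strength(model, X, i, j, n_grid=10):
    """H-statistic-style score: how much the joint effect of features (i, j)
    deviates from the sum of their individual effects."""
    gi = np.quantile(X[:, i], np.linspace(0.05, 0.95, n_grid))
    gj = np.quantile(X[:, j], np.linspace(0.05, 0.95, n_grid))

    pd_i = partial_dependence(model, X, [i], [[v] for v in gi])
    pd_j = partial_dependence(model, X, [j], [[v] for v in gj])
    pd_ij = partial_dependence(
        model, X, [i, j], [[vi, vj] for vi in gi for vj in gj]
    )

    # Center both surfaces before comparing, as in the H-statistic.
    joint = pd_ij - pd_ij.mean()
    additive = (pd_i[:, None] + pd_j[None, :]).ravel()
    additive = additive - additive.mean()
    return np.sqrt(np.sum((joint - additive) ** 2) / np.sum(joint ** 2))


# The (x0, x1) score should be clearly larger than the (x0, x2) score,
# reflecting the interaction built into the synthetic DGP above.
print("x0-x1 interaction:", interaction_strength(model, X, 0, 1))
print("x0-x2 interaction:", interaction_strength(model, X, 0, 2))
```

Because the probing relies only on model predictions, the same procedure could wrap any sufficiently accurate estimator, which is the model-agnostic property the abstract emphasizes.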
