Paper Type

Complete

Abstract

Explainability allows end-users to form a transparent and humane reckoning of an ML scheme's capability and utility. An ML model's modus operandi can be explained via the features on which it was trained. To the best of our knowledge, however, no prior work explains feature importance in terms of class-distinguishing ability. In a given dataset, a feature is not equally good at distinguishing between the data points' possible categorizations (or classes). This work explains features based on their class- or category-distinguishing capabilities. We estimate the variables' class-distinguishing capabilities (scores) for pair-wise class combinations, utilize them in a missing-feature context, and propose a novel decision-making protocol. A key novelty of this work lies in refusing to render a decision when a missing feature (of the test point) has high class-distinguishing potential for the likely classes. Two real-world datasets are used to empirically validate the explainability of our scheme.
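The protocol described in the abstract can be sketched in code. This is a minimal illustration only: the scoring heuristic (standardized difference of class-conditional means), the abstention threshold, and all function names are assumptions for illustration, since the abstract does not state the paper's actual scoring method.

```python
import numpy as np
from itertools import combinations

def pairwise_scores(X, y):
    """Score each feature's ability to separate each pair of classes.

    Illustrative heuristic (an assumption, not the paper's method):
    absolute difference of class-conditional means, normalized by the
    pooled standard deviation. Higher score = better at telling the
    two classes apart. Returns {(class_a, class_b): score_per_feature}.
    """
    scores = {}
    for a, b in combinations(np.unique(y), 2):
        Xa, Xb = X[y == a], X[y == b]
        pooled = np.sqrt((Xa.var(axis=0) + Xb.var(axis=0)) / 2) + 1e-12
        scores[(a, b)] = np.abs(Xa.mean(axis=0) - Xb.mean(axis=0)) / pooled
    return scores

def decide(point, missing_idx, likely_pair, scores, predictor, threshold=1.0):
    """Decision protocol with refusal: if the test point's missing
    feature is highly class-distinguishing for the two likely classes,
    abstain (return None); otherwise defer to the base predictor."""
    pair = tuple(sorted(likely_pair))
    if scores[pair][missing_idx] > threshold:
        return None  # refuse: the missing feature was decisive for this pair
    return predictor(point)
```

For example, if feature 0 strongly separates classes 0 and 1 and is missing from a test point whose likely classes are exactly that pair, `decide` returns `None` rather than guessing.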

Paper Number

1111

Comments

SIGDSA

Aug 16th, 12:00 AM

Knowing the class distinguishing abilities of the features, to build better decision-making models

