Paper Type
Complete
Abstract
Explainability allows end-users to have a transparent and humane reckoning of an ML scheme's capability and utility. An ML model's modus operandi can be explained via the features on which it was trained. To this end, we found no prior work explaining the features' importance based on their class-distinguishing abilities. In a given dataset, a feature is not equally good at distinguishing between the data points' possible categorizations (or classes). This work explains the features based on their class- or category-distinguishing capabilities. We estimate the features' class-distinguishing capabilities (scores) for pair-wise class combinations, utilize them in a missing-feature context, and propose a novel decision-making protocol. A key novelty of this work lies in refusing to render a decision when a missing feature (of the test point) has high class-distinguishing potential for the likely classes. Two real-world datasets are used to empirically validate the explainability of our scheme.
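To make the refusal idea concrete, the following is a minimal sketch of the protocol the abstract describes: score each feature's ability to separate each pair of classes, then refuse a decision when the test point's missing feature scores highly for the two most likely classes. The scoring heuristic here (a standardized mean difference) and all function names are illustrative assumptions, not the authors' actual scheme.

```python
import numpy as np

def pairwise_separation_score(feature_vals, labels, class_a, class_b):
    """Illustrative class-distinguishing score for one feature and one
    class pair: standardized difference of the per-class means.
    (Assumed heuristic -- the paper's scoring method is not reproduced.)"""
    a = feature_vals[labels == class_a]
    b = feature_vals[labels == class_b]
    pooled_std = np.sqrt((a.var() + b.var()) / 2) + 1e-12  # avoid divide-by-zero
    return abs(a.mean() - b.mean()) / pooled_std

def decide_or_refuse(scores, missing_feature, likely_classes, threshold):
    """Refuse to render a decision when the test point's missing feature
    has a high class-distinguishing score for the likely class pair."""
    key = (missing_feature, tuple(sorted(likely_classes)))
    return "refuse" if scores[key] > threshold else "decide"
```

Usage: precompute `scores` as a dictionary keyed by (feature, class-pair) over the training data; at test time, if the missing feature's score for the two most probable classes exceeds a chosen threshold, the model abstains rather than guessing.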
Paper Number
1111
Recommended Citation
Sadhukhan, Payel; Sengupta, Kausik; Palit, Sarbani; and Chakraborty, Tanujit, "Knowing the class distinguishing abilities of the features, to build better decision-making models" (2024). AMCIS 2024 Proceedings. 21.
https://aisel.aisnet.org/amcis2024/dsa/dsa/21
Knowing the class distinguishing abilities of the features, to build better decision-making models
Comments
SIGDSA