

Poster in Workshop: Table Representation Learning Workshop

Explaining Explainers: Necessity and Sufficiency in Tabular Data

Prithwijit Chowdhury · Mohit Prabhushankar · Ghassan AlRegib

Keywords: [ Explainable AI ] [ counterfactuals ] [ machine learning ] [ ML Algorithms ] [ Trust in AI ] [ Classification ] [ High dimensional data ] [ XAI ]


Abstract:

In recent years, ML classifiers trained on tabular data have been used to make fast and efficient decisions across a range of decision-making tasks. The lack of transparency in the decision processes of these models has led to the emergence of explainable AI (XAI). However, discrepancies exist among XAI methods, raising concerns about their accuracy. The notion of which features are "important" or "relevant" differs across explanation strategies. Grounding these methods in theoretically backed notions of necessity and sufficiency can therefore be a reliable way to increase their trustworthiness. We propose a novel approach to quantify these two concepts, providing a means to explore which explanation method is suitable for tasks involving sparse, high-dimensional tabular datasets. Moreover, our global necessity and sufficiency scores aim to help experts correlate their domain knowledge with our findings, and provide an additional basis for evaluating the results of popular local explanation methods such as LIME and SHAP.
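The abstract does not spell out how the global scores are computed, so the following is only a minimal sketch of one plausible perturbation-based reading of necessity and sufficiency for a tabular classifier, not the authors' formulation. The dataset, model, mean-value baseline, and the two scoring rules (a feature is "necessary" to the extent that ablating it flips the model's decision, and "sufficient" to the extent that it alone preserves the decision) are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's method): perturbation-based global
# necessity/sufficiency probes for a tabular classifier, using only sklearn/numpy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

baseline = X_tr.mean(axis=0)   # assumed reference values used to "remove" a feature
preds = model.predict(X_te)    # original model decisions on the test set

def necessity_scores(model, X, preds, baseline):
    """Fraction of points whose decision flips when feature j is replaced by the baseline."""
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        X_ablate = X.copy()
        X_ablate[:, j] = baseline[j]              # ablate feature j
        scores[j] = np.mean(model.predict(X_ablate) != preds)
    return scores

def sufficiency_scores(model, X, preds, baseline):
    """Fraction of points whose decision is preserved when only feature j keeps its value."""
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        X_only = np.tile(baseline, (X.shape[0], 1))
        X_only[:, j] = X[:, j]                    # keep only feature j
        scores[j] = np.mean(model.predict(X_only) == preds)
    return scores

nec = necessity_scores(model, X_te, preds, baseline)
suf = sufficiency_scores(model, X_te, preds, baseline)

# Such global scores could then be compared with feature rankings from local
# explainers like LIME or SHAP, to check whether features deemed "important"
# are also necessary and/or sufficient for the model's decisions.
for j in np.argsort(-nec)[:5]:
    print(f"feature {j}: necessity={nec[j]:.3f}, sufficiency={suf[j]:.3f}")
```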
