

Poster in Workshop: Learning from Time Series for Health

Improving Counterfactual Explanations for Time Series Classification Models in Healthcare Settings

Tina Han · Jette Henderson · Pedram Akbarian Saravi · Joydeep Ghosh


Abstract:

Explanations of machine learning models' decisions can help build trust as well as identify and isolate unexpected model behavior. Time series data, abundant in medical applications, and their associated classifiers pose a particularly difficult explainability problem because of the inherent dependencies among features, which lead to complex modeling decisions and assumptions. A counterfactual explanation for a given time series tells the user how the input to the model would need to change in order to receive a different class prediction from the classifier. While a few methods for generating counterfactual explanations for time series have been proposed, the need for simplicity and plausibility has been overlooked. In this paper, we propose an easily understood method to generate realistic counterfactual explanations for any black-box time series model. Our method, Shapelet-Guided Realistic Counterfactual Explanation Generation for Black-Box Time Series Classifiers (SGRCEG), grounds the search for counterfactual explanations in shapelets, which are discriminative subsequences of time series, and greedily constructs counterfactual explanations from them. Additionally, SGRCEG employs a realism check that minimizes the likelihood of producing an implausible counterfactual. Using SGRCEG, model developers as well as medical practitioners can better understand the decisions of their models.
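The abstract describes a greedy, shapelet-grounded search with a plausibility filter. Below is a minimal illustrative sketch of that general idea, not the authors' actual SGRCEG implementation: the helpers `predict_proba` (black-box class probabilities), `shapelets` (precomputed candidate subsequences with placement positions), and `realism_check` (a user-supplied plausibility test) are all assumed names introduced here for illustration.

```python
# Illustrative sketch of a shapelet-guided greedy counterfactual search.
# All helper names are hypothetical stand-ins, not the paper's implementation.
import numpy as np

def generate_counterfactual(x, predict_proba, shapelets, target_class,
                            realism_check, max_edits=5):
    """Greedily substitute candidate shapelets into `x` until the black-box
    classifier assigns the target class, rejecting implausible edits."""
    cf = x.copy()
    for _ in range(max_edits):
        best_candidate, best_gain = None, 0.0
        base_prob = predict_proba(cf)[target_class]
        for shapelet, start in shapelets:            # candidate subsequence and location
            candidate = cf.copy()
            candidate[start:start + len(shapelet)] = shapelet
            if not realism_check(candidate):         # skip implausible series
                continue
            gain = predict_proba(candidate)[target_class] - base_prob
            if gain > best_gain:                     # keep the most improving edit
                best_candidate, best_gain = candidate, gain
        if best_candidate is None:                   # no realistic improving edit found
            break
        cf = best_candidate
        if np.argmax(predict_proba(cf)) == target_class:
            return cf                                # prediction flipped to the target class
    return None                                      # no counterfactual within the edit budget
```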
