

Poster

Reproducibility Study of "Learning Perturbations to Explain Time Series Predictions"

Luke Cadigan · Paulius Skaisgiris · Sebastian U. Arias · Jiapeng Fan

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

In this work, we attempt to reproduce the results of Enguehard (2023), which introduced ExtremalMask, a mask-based perturbation method for explaining time series data. We investigate the key claims of this paper, namely that (1) the model outperformed other models on several key metrics on both synthetic and real data, and (2) the model performed better when trained with the loss function of the preservation game rather than that of the deletion game. Although some discrepancies exist, our results generally support the core of the original paper’s conclusions. Next, we interpret ExtremalMask’s outputs using new visualizations and metrics and discuss the insights each interpretation provides. Finally, we test whether ExtremalMask creates out-of-distribution samples and find that the model does not exhibit this flaw on our tested synthetic dataset. Overall, our results support and add nuance to the original paper’s findings.
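For readers unfamiliar with the preservation/deletion distinction mentioned above, the following is a minimal illustrative sketch, not the authors' code: it assumes a black-box model `f`, an input series `x`, a learnable mask, and a learned perturbation, and the exact regularizers and sign conventions are simplifications rather than the formulation in Enguehard (2023).

```python
# Hedged sketch of preservation- vs. deletion-game objectives for a
# mask-based perturbation explainer. Names and loss terms are illustrative
# assumptions, not the original ExtremalMask implementation.
import torch
import torch.nn.functional as F


def preservation_loss(f, x, mask, perturbation, lam=1.0):
    """Preservation game: keep the prediction while masking out as much
    of the input as possible (mask entries near 1 mean 'keep')."""
    x_masked = mask * x + (1 - mask) * perturbation
    fidelity = F.mse_loss(f(x_masked), f(x))   # prediction should be preserved
    sparsity = mask.abs().mean()               # penalize keeping too much
    return fidelity + lam * sparsity


def deletion_loss(f, x, mask, perturbation, lam=1.0):
    """Deletion game: change the prediction while removing as little
    of the input as possible."""
    x_masked = mask * x + (1 - mask) * perturbation
    fidelity = -F.mse_loss(f(x_masked), f(x))  # prediction should be destroyed
    density = (1 - mask).abs().mean()          # penalize removing too much
    return fidelity + lam * density
```

In this toy formulation the two games differ only in the sign of the fidelity term and in which portion of the mask is regularized, which is the trade-off the reproduction examines when comparing the two loss functions.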
