Poster

The Memory-Perturbation Equation: Understanding Model's Sensitivity to Data

Peter Nickl · Lu Xu · Dharmesh Tailor · Thomas Möllenhoff · Mohammad Emtiyaz Khan

Great Hall & Hall B1+B2 (level 1) #1310
[ Paper ] [ Poster ] [ OpenReview ]
Wed 13 Dec 3 p.m. PST — 5 p.m. PST

Abstract:

Understanding a model's sensitivity to its training data is crucial but can also be challenging and costly, especially during training. To simplify such issues, we present the Memory-Perturbation Equation (MPE), which relates a model's sensitivity to perturbations in its training data. Derived using Bayesian principles, the MPE unifies existing sensitivity measures, generalizes them to a wide variety of models and algorithms, and unravels useful properties regarding sensitivities. Our empirical results show that sensitivity estimates obtained during training can be used to faithfully predict generalization on unseen test data. The proposed equation is expected to be useful for future research on robust and adaptive learning.
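To make the idea concrete, here is a minimal sketch of one classical sensitivity measure of the kind the MPE is said to unify: an influence-function-style leave-one-out score for regularized logistic regression, computed as the quadratic form of each example's gradient with the inverse Hessian. All formulas, names, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Synthetic binary classification data (purely illustrative).
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit L2-regularized logistic regression by plain gradient descent.
lam = 1.0
w = np.zeros(d)
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + lam * w
    w -= 0.1 * grad / n

# Hessian of the regularized loss at the fitted weights.
p = sigmoid(X @ w)
R = p * (1 - p)                               # per-example curvature
H = X.T @ (X * R[:, None]) + lam * np.eye(d)
H_inv = np.linalg.inv(H)

# Sensitivity of example i: an influence-style estimate of how much the
# fit changes if example i is perturbed, s_i = g_i^T H^{-1} g_i,
# where g_i is that example's gradient of the (unregularized) loss.
G = X * (p - y)[:, None]                      # per-example gradients
sensitivity = np.einsum('id,de,ie->i', G, H_inv, G)

# Highly sensitive examples are candidates for memorization/high influence.
top = np.argsort(-sensitivity)[:5]
print(top, sensitivity[top])
```

Because `H` is positive definite (the regularizer adds `lam * I`), every score is non-negative; ranking examples by this score flags the training points the fitted model depends on most.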
