

Invited Talk in Workshop: I Can’t Believe It’s Not Better: Understanding Deep Learning Through Empirical Falsification

Fanny Yang: Surprising failures of standard practices in ML when the sample size is small.


Abstract:

In this talk, we discuss two failure cases of common practices that are typically believed to improve on vanilla methods: (i) adversarial training can lead to worse robust accuracy than standard training, and (ii) active learning can lead to a worse classifier than a model trained on uniformly sampled data. In particular, we show, both theoretically and empirically, that such failures can occur in the small-sample regime. We then discuss high-level explanations derived from the theory that shed light on the causes of these phenomena in practice.
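As a concrete way to experiment with the first claim, the following is a minimal sketch (not the speaker's code) that compares standard and ℓ∞-adversarial logistic regression on a small synthetic training set. For a linear classifier, the worst-case ℓ∞ perturbation of radius ε reduces the margin y·w·x by ε‖w‖₁, so the inner maximization of adversarial training has a closed form. The data model, dimensions, and perturbation budget below are all illustrative assumptions; whether adversarial training actually hurts robust accuracy depends on the sample size and ε, which is exactly the regime the talk examines.

```python
# Illustrative sketch: standard vs. l_inf-adversarial logistic regression
# in a small-sample regime. For a linear classifier f(x) = w.x, the
# worst-case l_inf perturbation of radius eps shrinks the margin
# y * w.x by eps * ||w||_1, giving a closed-form adversarial loss.
# All parameters (d, n_train, eps, the data model) are arbitrary
# choices for illustration, not the speaker's experimental setup.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test, eps = 50, 20, 2000, 0.1

def sample(n):
    # Toy model: the label is carried by the first coordinate only.
    y = rng.choice([-1.0, 1.0], size=n)
    X = rng.normal(0.0, 1.0, size=(n, d))
    X[:, 0] = 2.0 * y + 0.5 * rng.normal(size=n)  # informative coordinate
    return X, y

def train(X, y, eps, lr=0.1, steps=2000):
    # Gradient descent on the adversarial logistic loss
    #   log(1 + exp(-(y * w.x - eps * ||w||_1))).
    # eps = 0 recovers standard logistic regression.
    w = np.zeros(d)
    n = len(y)
    for _ in range(steps):
        margins = y * (X @ w) - eps * np.abs(w).sum()
        s = -1.0 / (1.0 + np.exp(margins))  # d(loss)/d(margin)
        grad = (X.T @ (s * y)) / n - eps * np.mean(s) * np.sign(w)
        w -= lr * grad
    return w

def robust_acc(w, X, y, eps):
    # A linear classifier is robustly correct iff y * w.x > eps * ||w||_1.
    return np.mean(y * (X @ w) - eps * np.abs(w).sum() > 0)

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)
w_std = train(X_tr, y_tr, eps=0.0)   # standard training
w_adv = train(X_tr, y_tr, eps=eps)   # adversarial training
print("robust acc, standard:   ", robust_acc(w_std, X_te, y_te, eps))
print("robust acc, adversarial:", robust_acc(w_adv, X_te, y_te, eps))
```

Varying n_train and eps in this sketch is a simple way to probe the small-sample regime the abstract refers to; the active-learning claim could be probed analogously by comparing uncertainty sampling against uniform sampling under a similarly small labeling budget.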
