Fanny Yang: Surprising failures of standard practices in ML when the sample size is small.
Fanny Yang
Sat Dec 03 01:30 PM -- 01:55 PM (PST)
In this talk, we discuss two failure cases of common practices that are typically believed to improve on vanilla methods: (i) adversarial training can lead to worse robust accuracy than standard training, and (ii) active learning can lead to a worse classifier than one trained on uniformly sampled data. In particular, we prove mathematically, and demonstrate empirically, that such failures can happen in the small-sample regime. We discuss high-level explanations derived from the theory that shed light on the causes of these phenomena in practice.
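As a hedged illustration of the first claim (this is not the authors' construction; the data model, the classifier, and the perturbation budget eps are assumptions chosen only to make the comparison concrete), the following numpy sketch trains a linear classifier with and without l_inf-adversarial training on a toy Gaussian mixture and evaluates robust accuracy at several sample sizes. For a linear model the worst-case l_inf attack has a closed form, which keeps the sketch self-contained.

```python
# Illustrative sketch: standard vs. adversarially trained logistic regression
# on a two-class Gaussian mixture, comparing l_inf-robust test accuracy as the
# training-set size n varies. All settings below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, d=50, mu_scale=2.0):
    """Two Gaussian classes separated along the first coordinate."""
    y = rng.choice([-1.0, 1.0], size=n)
    mu = np.zeros(d)
    mu[0] = mu_scale
    X = rng.normal(size=(n, d)) + np.outer(y, mu)
    return X, y

def train_logistic(X, y, eps=0.0, lr=0.1, steps=2000):
    """Gradient descent on logistic loss; eps > 0 trains on l_inf-adversarial
    examples. For a linear model the worst-case perturbation is closed form:
    shift each point by -eps * y * sign(w)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
        margins = np.clip(y * (X_adv @ w), -30.0, 30.0)
        grad = -(y[:, None] * X_adv * (1.0 / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def robust_accuracy(w, X, y, eps):
    """Accuracy under the same closed-form worst-case l_inf perturbation,
    which reduces the margin by eps * ||w||_1."""
    margins = y * (X @ w) - eps * np.abs(w).sum()
    return float((margins > 0).mean())

eps = 0.5
X_test, y_test = make_data(10_000)
for n in [20, 50, 200, 1000]:
    X, y = make_data(n)
    w_std = train_logistic(X, y, eps=0.0)
    w_adv = train_logistic(X, y, eps=eps)
    print(f"n={n:5d}  robust acc: standard={robust_accuracy(w_std, X_test, y_test, eps):.3f}  "
          f"adversarial={robust_accuracy(w_adv, X_test, y_test, eps):.3f}")
```

Whether adversarial training falls behind standard training in such a sketch depends on the choice of n, d, mu_scale, and eps; the talk's point is precisely that in the small-sample regime this reversal can provably occur.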
Author Information
Fanny Yang (ETH Zurich)
More from the Same Authors
- 2022 : Certified defences hurt generalisation
  Piersilvio De Bartolomeis · Jacob Clarysse · Fanny Yang · Amartya Sanyal
- 2023 Poster: Can semi-supervised learning use all the data effectively? A lower bound perspective
  Gizem Yüce · Alexandru Tifrea · Amartya Sanyal · Fanny Yang
- 2023 Workshop: Workshop on Distribution Shifts: New Frontiers with Foundation Models
  Rebecca Roelofs · Fanny Yang · Hongseok Namkoong · Masashi Sugiyama · Jacob Eisenstein · Pang Wei Koh · Shiori Sagawa · Tatsunori Hashimoto · Yoonho Lee
- 2022 : Fanny Yang: Surprising failures of standard practices in ML when the sample size is small.
  Fanny Yang
- 2022 Workshop: Workshop on Distribution Shifts: Connecting Methods and Applications
  Chelsea Finn · Fanny Yang · Hongseok Namkoong · Masashi Sugiyama · Jacob Eisenstein · Jonas Peters · Rebecca Roelofs · Shiori Sagawa · Pang Wei Koh · Yoonho Lee
- 2021 Workshop: Distribution shifts: connecting methods and applications (DistShift)
  Shiori Sagawa · Pang Wei Koh · Fanny Yang · Hongseok Namkoong · Jiashi Feng · Kate Saenko · Percy Liang · Sarah Bird · Sergey Levine