Visualizing the Loss Landscape of Neural Nets
Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that are easier to train, and that well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.
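The filter-normalization idea lends itself to a short sketch: draw two random Gaussian directions in weight space, rescale each filter of a direction to match the norm of the corresponding filter in the trained weights, and evaluate the loss on a 2-D grid around the minimizer. The PyTorch code below is a minimal illustration under those assumptions, not the authors' released implementation; `model` (a trained network) and `loss_fn` (a callable returning the training loss for the model's current weights) are hypothetical stand-ins.

```python
# Minimal sketch of filter-normalized loss-surface plotting in PyTorch.
# `model` and `loss_fn` are assumed stand-ins, not the paper's released code.
import torch

@torch.no_grad()
def filter_normalized_direction(model):
    """Random Gaussian direction, rescaled so each filter matches the
    norm of the corresponding filter in the trained weights."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:                    # conv/linear weights
            for d_f, p_f in zip(d, p):     # one "filter" per slice of dim 0
                d_f.mul_(p_f.norm() / (d_f.norm() + 1e-10))
        else:                              # biases / BN params: hold fixed
            d.zero_()
        direction.append(d)
    return direction

@torch.no_grad()
def loss_surface(model, loss_fn, steps=25, span=1.0):
    """Evaluate L(theta + a*delta + b*eta) on a (steps x steps) grid."""
    theta = [p.clone() for p in model.parameters()]
    delta = filter_normalized_direction(model)
    eta = filter_normalized_direction(model)
    coords = torch.linspace(-span, span, steps)
    surface = torch.zeros(steps, steps)
    for i, a in enumerate(coords):
        for j, b in enumerate(coords):
            for p, t, d1, d2 in zip(model.parameters(), theta, delta, eta):
                p.copy_(t + a * d1 + b * d2)
            surface[i, j] = loss_fn(model)
    for p, t in zip(model.parameters(), theta):   # restore the minimizer
        p.copy_(t)
    return coords, surface
```

The per-filter rescaling is the point of the method: networks with ReLU activations and batch normalization are scale-invariant, so an unnormalized random direction makes the landscape around a large-norm minimizer look artificially flat, and plots of different minimizers would not be comparable.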
Author Information
Hao Li (Amazon Web Services)
Zheng Xu (University of Maryland, College Park)
Gavin Taylor (US Naval Academy)
Christoph Studer (Cornell University)
Tom Goldstein (University of Maryland)
More from the Same Authors
- 2020: An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process
  David Tran · Alex Valtchanov · Keshav R Ganapathy · Raymond Feng · Eric Slud · Micah Goldblum · Tom Goldstein
- 2021: Execute Order 66: Targeted Data Poisoning for Reinforcement Learning via Minuscule Perturbations
  Harrison Foley · Liam Fowl · Tom Goldstein · Gavin Taylor
- 2021: Diurnal or Nocturnal? Federated Learning from Periodically Shifting Distributions
  Chen Zhu · Zheng Xu · Mingqing Chen · Jakub Konečný · Andrew S Hard · Tom Goldstein
- 2021 Poster: GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training
  Chen Zhu · Renkun Ni · Zheng Xu · Kezhi Kong · W. Ronny Huang · Tom Goldstein
- 2020: The Intrinsic Dimension of Images and Its Impact on Learning
  Chen Zhu · Micah Goldblum · Ahmed Abdelkader · Tom Goldstein · Phillip Pope
- 2020 Workshop: Workshop on Dataset Curation and Security
  Nathalie Baracaldo Angel · Yonatan Bisk · Avrim Blum · Michael Curry · John Dickerson · Micah Goldblum · Tom Goldstein · Bo Li · Avi Schwarzschild
- 2020 Poster: Detection as Regression: Certified Object Detection with Median Smoothing
  Ping-yeh Chiang · Michael Curry · Ahmed Abdelkader · Aounon Kumar · John Dickerson · Tom Goldstein
- 2020 Poster: Certifying Confidence via Randomized Smoothing
  Aounon Kumar · Alexander Levine · Soheil Feizi · Tom Goldstein
- 2020 Poster: Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
  Micah Goldblum · Liam Fowl · Tom Goldstein
- 2020 Poster: MetaPoison: Practical General-purpose Clean-label Data Poisoning
  W. Ronny Huang · Jonas Geiping · Liam Fowl · Gavin Taylor · Tom Goldstein
- 2020 Poster: Certifying Strategyproof Auction Networks
  Michael Curry · Ping-yeh Chiang · Tom Goldstein · John Dickerson
- 2019 Poster: Adversarial training for free!
  Ali Shafahi · Mahyar Najibi · Mohammad Amin Ghiasi · Zheng Xu · John Dickerson · Christoph Studer · Larry Davis · Gavin Taylor · Tom Goldstein
- 2018 Poster: Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
  Ali Shafahi · W. Ronny Huang · Mahyar Najibi · Octavian Suciu · Christoph Studer · Tudor Dumitras · Tom Goldstein
- 2017 Poster: Training Quantized Nets: A Deeper Understanding
  Hao Li · Soham De · Zheng Xu · Christoph Studer · Hanan Samet · Tom Goldstein
- 2016 Workshop: Machine Learning for Education
  Richard Baraniuk · Jiquan Ngiam · Christoph Studer · Phillip Grimaldi · Andrew Lan
- 2015: Spotlight
  Furong Huang · William Gray Roncal · Tom Goldstein
- 2015 Poster: Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing
  Tom Goldstein · Min Li · Xiaoming Yuan
- 2014 Workshop: Human Propelled Machine Learning
  Richard Baraniuk · Michael Mozer · Divyanshu Vats · Christoph Studer · Andrew E Waters · Andrew Lan