Investigating Reproducibility from the Decision Boundary Perspective
Gowthami Somepalli · Arpit Bansal · Liam Fowl · Ping-yeh Chiang · Yehuda Dar · Richard Baraniuk · Micah Goldblum · Tom Goldstein

The superiority of neural networks over classical linear classifiers stems from their ability to slice image space into complex class regions. While neural network training is certainly not well understood, existing theories of neural network training primarily focus on the geometry of loss landscapes. Meanwhile, considerably less is known about the geometry of class boundaries. The geometry of these regions depends strongly on the inductive bias of neural network models, which we do not currently have the tools to analyze rigorously. In this study, we use empirical tools to study the geometry of class regions and try to answer two questions: Do neural networks produce decision boundaries that are consistent across random initializations? Do different neural architectures have measurable differences in inductive bias?
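The reproducibility question above can be probed with a very simple experiment: train the same architecture twice with different random seeds and measure how often the two runs assign the same class across the input space. The sketch below is only an illustration of that idea on a toy 2-D dataset, not the paper's actual protocol or datasets (the `make_moons` data, network size, and grid bounds are all assumptions made for this example).

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Toy 2-D dataset standing in for image space (illustrative only).
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

def train(seed):
    # Identical architecture and data; only the initialization seed differs.
    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=seed)
    return clf.fit(X, y)

# Two runs that differ only in random initialization.
net_a, net_b = train(1), train(2)

# Dense grid over the input plane: the decision regions are the
# predicted labels on this grid.
xx, yy = np.meshgrid(np.linspace(-2, 3, 200), np.linspace(-1.5, 2, 200))
grid = np.c_[xx.ravel(), yy.ravel()]

# Fraction of the plane on which the two runs agree -- a crude proxy
# for decision-boundary reproducibility.
agreement = np.mean(net_a.predict(grid) == net_b.predict(grid))
print(f"decision-region agreement: {agreement:.3f}")
```

A low agreement score would indicate that the learned class regions are sensitive to initialization; comparing scores across architectures gives a rough handle on differences in inductive bias.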

Author Information

Gowthami Somepalli (University of Maryland, College Park)
Arpit Bansal (University of Maryland, College Park)
Liam Fowl (University of Maryland)
Ping-yeh Chiang (University of Maryland, College Park)
Yehuda Dar (Rice University)
Richard Baraniuk (Rice University)
Micah Goldblum (University of Maryland)
Tom Goldstein (University of Maryland)