Poster
Multiple Descent: Design Your Own Generalization Curve
Lin Chen · Yifei Min · Mikhail Belkin · Amin Karbasi
This paper explores the generalization loss of linear regression in variably parameterized families of models, both under-parameterized and over-parameterized. We show that the generalization curve can have an arbitrary number of peaks, and moreover, the locations of those peaks can be explicitly controlled. Our results highlight the fact that both the classical U-shaped generalization curve and the recently observed double descent curve are not intrinsic properties of the model family. Instead, their emergence is due to the interaction between the properties of the data and the inductive biases of learning algorithms.
Author Information
Lin Chen (University of California, Berkeley)
Yifei Min (Yale University)
Mikhail Belkin (Ohio State University)
Amin Karbasi (Yale University)
More from the Same Authors
- 2022: Exact Gradient Computation for Spiking Neural Networks (Jane Lee · Saeid Haghighatshoar · Amin Karbasi)
- 2022 Poster: Submodular Maximization in Clean Linear Time (Wenxin Li · Moran Feldman · Ehsan Kazemi · Amin Karbasi)
- 2022 Poster: Universal Rates for Interactive Learning (Steve Hanneke · Amin Karbasi · Shay Moran · Grigoris Velegkas)
- 2022 Poster: Black-Box Generalization: Stability of Zeroth-Order Learning (Konstantinos Nikolakakis · Farzin Haddadpour · Dionysis Kalogerias · Amin Karbasi)
- 2022 Poster: Reinforcement Learning with Logarithmic Regret and Policy Switches (Grigoris Velegkas · Zhuoran Yang · Amin Karbasi)
- 2022 Poster: Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes (Alkis Kalavasis · Grigoris Velegkas · Amin Karbasi)
- 2022 Poster: Fast Neural Kernel Embeddings for General Activations (Insu Han · Amir Zandieh · Jaehoon Lee · Roman Novak · Lechao Xiao · Amin Karbasi)
- 2022 Poster: On Optimal Learning Under Targeted Data Poisoning (Steve Hanneke · Amin Karbasi · Mohammad Mahmoody · Idan Mehalel · Shay Moran)
- 2021 Poster: An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks (Shashank Rajput · Kartik Sreenivasan · Dimitris Papailiopoulos · Amin Karbasi)
- 2021 Poster: Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures (Yuan Cao · Quanquan Gu · Mikhail Belkin)
- 2021 Poster: Variance-Aware Off-Policy Evaluation with Linear Function Approximation (Yifei Min · Tianhao Wang · Dongruo Zhou · Quanquan Gu)
- 2021 Poster: Parallelizing Thompson Sampling (Amin Karbasi · Vahab Mirrokni · Mohammad Shadravan)
- 2021 Poster: Submodular + Concave (Siddharth Mitra · Moran Feldman · Amin Karbasi)
- 2020 Poster: Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition (Lin Chen · Qian Yu · Hannah Lawrence · Amin Karbasi)
- 2018 Poster: Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate (Mikhail Belkin · Daniel Hsu · Partha P Mitra)
- 2017 Poster: Diving into the shallows: a computational perspective on large-scale shallow learning (Siyuan Ma · Mikhail Belkin)
- 2017 Spotlight: Diving into the shallows: a computational perspective on large-scale shallow learning (Siyuan Ma · Mikhail Belkin)
- 2016 Poster: Graphons, mergeons, and so on! (Justin Eldridge · Mikhail Belkin · Yusu Wang)
- 2016 Oral: Graphons, mergeons, and so on! (Justin Eldridge · Mikhail Belkin · Yusu Wang)
- 2016 Poster: Clustering with Bregman Divergences: an Asymptotic Analysis (Chaoyue Liu · Mikhail Belkin)
- 2015 Poster: A Pseudo-Euclidean Iteration for Optimal Recovery in Noisy ICA (James R Voss · Mikhail Belkin · Luis Rademacher)
- 2014 Poster: Learning with Fredholm Kernels (Qichao Que · Mikhail Belkin · Yusu Wang)
- 2013 Workshop: Modern Nonparametric Methods in Machine Learning (Arthur Gretton · Mladen Kolar · Samory Kpotufe · John Lafferty · Han Liu · Bernhard Schölkopf · Alexander Smola · Rob Nowak · Mikhail Belkin · Lorenzo Rosasco · Peter Bickel · Yue Zhao)
- 2013 Poster: Inverse Density as an Inverse Problem: the Fredholm Equation Approach (Qichao Que · Mikhail Belkin)
- 2013 Poster: Fast Algorithms for Gaussian Noise Invariant Independent Component Analysis (James R Voss · Luis Rademacher · Mikhail Belkin)
- 2013 Spotlight: Inverse Density as an Inverse Problem: the Fredholm Equation Approach (Qichao Que · Mikhail Belkin)
- 2013 Poster: Noise-Enhanced Associative Memories (Amin Karbasi · Amir Hesam Salavati · Amin Shokrollahi · Lav R Varshney)
- 2013 Poster: Distributed Submodular Maximization: Identifying Representative Elements in Massive Data (Baharan Mirzasoleiman · Amin Karbasi · Rik Sarkar · Andreas Krause)
- 2013 Spotlight: Noise-Enhanced Associative Memories (Amin Karbasi · Amir Hesam Salavati · Amin Shokrollahi · Lav R Varshney)
- 2011 Poster: Data Skeletonization via Reeb Graphs (Xiaoyin Ge · Issam I Safa · Mikhail Belkin · Yusu Wang)
- 2009 Poster: Semi-supervised Learning using Sparse Eigenfunction Bases (Kaushik Sinha · Mikhail Belkin)
- 2007 Spotlight: The Value of Labeled and Unlabeled Examples when the Model is Imperfect (Kaushik Sinha · Mikhail Belkin)
- 2007 Poster: The Value of Labeled and Unlabeled Examples when the Model is Imperfect (Kaushik Sinha · Mikhail Belkin)
- 2006 Poster: On the Relation Between Low Density Separation, Spectral Clustering and Graph Cuts (Hariharan Narayanan · Mikhail Belkin · Partha Niyogi)
- 2006 Poster: Convergence of Laplacian Eigenmaps (Mikhail Belkin · Partha Niyogi)