The theory of spectral filtering is a remarkable tool for understanding the statistical properties of learning with kernels. For least squares, it makes it possible to derive various regularization schemes that yield faster convergence rates of the excess risk than Tikhonov regularization. This is typically achieved by leveraging classical assumptions called source and capacity conditions, which characterize the difficulty of the learning task. In order to understand estimators derived from other loss functions, Marteau-Ferey et al. have extended the theory of Tikhonov regularization to generalized self-concordant (GSC) loss functions, which include, e.g., the logistic loss. In this paper, we go a step further and show that fast and optimal rates can be achieved for GSC losses by using the iterated Tikhonov regularization scheme, which is intrinsically related to the proximal point method in optimization and overcomes the limitations of classical Tikhonov regularization.
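To make the scheme concrete, here is a minimal sketch of iterated Tikhonov regularization for kernel least squares. The function name and the toy setup are illustrative, not from the paper; each step solves a standard Tikhonov (ridge) problem warm-started at the previous iterate, which is exactly a proximal point step on the empirical risk. With one iteration the scheme reduces to classical Tikhonov regularization; iterating reduces the regularization bias.

```python
import numpy as np

def iterated_tikhonov(K, y, lam, n_iter=3):
    """Iterated Tikhonov regularization for kernel least squares.

    Each iteration solves (K + n*lam*I) c_t = y + n*lam * c_{t-1},
    i.e. a Tikhonov problem whose regularizer is centered at the
    previous iterate (a proximal point step). n_iter=1 recovers
    plain kernel ridge regression.
    """
    n = K.shape[0]
    A = K + n * lam * np.eye(n)
    c = np.zeros_like(y)
    for _ in range(n_iter):
        c = np.linalg.solve(A, y + n * lam * c)
    return c

# Toy example (hypothetical data): Gaussian kernel on random points.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)

c_ridge = iterated_tikhonov(K, y, lam=0.1, n_iter=1)   # classical Tikhonov
c_iter = iterated_tikhonov(K, y, lam=0.1, n_iter=5)    # iterated variant
```

Note that the fixed point of the iteration satisfies K c = y: as the number of iterations grows (with fixed lambda), the estimator interpolates the training data, so in practice the iteration count plays the role of an additional regularization parameter alongside lambda.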
Author Information
Gaspard Beugnot (INRIA)
Julien Mairal (Inria)
Alessandro Rudi (INRIA, Ecole Normale Superieure)
Related Events (a corresponding poster, oral, or spotlight)
-
2021 Spotlight: Beyond Tikhonov: faster learning with self-concordant losses, via iterative regularization »
More from the Same Authors
-
2021 Spotlight: Mixability made efficient: Fast online multiclass logistic regression »
Rémi Jézéquel · Pierre Gaillard · Alessandro Rudi -
2022 Poster: Non-Convex Bilevel Games with Critical Point Selection Maps »
Michael Arbel · Julien Mairal -
2022 Poster: Active Labeling: Streaming Stochastic Gradients »
Vivien Cabannes · Francis Bach · Vianney Perchet · Alessandro Rudi -
2021 Poster: Mixability made efficient: Fast online multiclass logistic regression »
Rémi Jézéquel · Pierre Gaillard · Alessandro Rudi -
2021 Poster: Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning »
Vivien Cabannes · Loucas Pillaud-Vivien · Francis Bach · Alessandro Rudi -
2021 Poster: PSD Representations for Effective Probability Models »
Alessandro Rudi · Carlo Ciliberto -
2021 Poster: A Trainable Spectral-Spatial Sparse Coding Model for Hyperspectral Image Restoration »
Theo Bodrito · Alexandre Zouaoui · Jocelyn Chanussot · Julien Mairal -
2020 Poster: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments »
Mathilde Caron · Ishan Misra · Julien Mairal · Priya Goyal · Piotr Bojanowski · Armand Joulin -
2020 Poster: A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding »
Bruno Lecouat · Jean Ponce · Julien Mairal -
2020 : Discussion Panel: Hugo Larochelle, Finale Doshi-Velez, Devi Parikh, Marc Deisenroth, Julien Mairal, Katja Hofmann, Phillip Isola, and Michael Bowling »
Hugo Larochelle · Finale Doshi-Velez · Marc Deisenroth · Devi Parikh · Julien Mairal · Katja Hofmann · Phillip Isola · Michael Bowling -
2019 Poster: On the Inductive Bias of Neural Tangent Kernels »
Alberto Bietti · Julien Mairal -
2019 Poster: Recurrent Kernel Networks »
Dexiong Chen · Laurent Jacob · Julien Mairal -
2019 Poster: A Generic Acceleration Framework for Stochastic Composite Optimization »
Andrei Kulunchakov · Julien Mairal -
2018 Poster: Unsupervised Learning of Artistic Styles with Archetypal Style Analysis »
Daan Wynen · Cordelia Schmid · Julien Mairal -
2017 Poster: Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure »
Alberto Bietti · Julien Mairal -
2017 Spotlight: Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure »
Alberto Bietti · Julien Mairal -
2017 Poster: Generalization Properties of Learning with Random Features »
Alessandro Rudi · Lorenzo Rosasco -
2017 Poster: Learning Neural Representations of Human Cognition across Many fMRI Studies »
Arthur Mensch · Julien Mairal · Danilo Bzdok · Bertrand Thirion · Gael Varoquaux -
2017 Oral: Generalization Properties of Learning with Random Features »
Alessandro Rudi · Lorenzo Rosasco -
2017 Poster: Consistent Multitask Learning with Nonlinear Output Relations »
Carlo Ciliberto · Alessandro Rudi · Lorenzo Rosasco · Massimiliano Pontil -
2017 Poster: FALKON: An Optimal Large Scale Kernel Method »
Alessandro Rudi · Luigi Carratino · Lorenzo Rosasco -
2017 Poster: Invariance and Stability of Deep Convolutional Representations »
Alberto Bietti · Julien Mairal -
2016 Poster: A Consistent Regularization Approach for Structured Prediction »
Carlo Ciliberto · Lorenzo Rosasco · Alessandro Rudi -
2016 Poster: End-to-End Kernel Learning with Supervised Convolutional Kernel Networks »
Julien Mairal -
2015 Poster: Less is More: Nyström Computational Regularization »
Alessandro Rudi · Raffaello Camoriano · Lorenzo Rosasco -
2015 Oral: Less is More: Nyström Computational Regularization »
Alessandro Rudi · Raffaello Camoriano · Lorenzo Rosasco -
2015 Poster: A Universal Catalyst for First-Order Optimization »
Hongzhou Lin · Julien Mairal · Zaid Harchaoui -
2014 Poster: Convolutional Kernel Networks »
Julien Mairal · Piotr Koniusz · Zaid Harchaoui · Cordelia Schmid -
2014 Spotlight: Convolutional Kernel Networks »
Julien Mairal · Piotr Koniusz · Zaid Harchaoui · Cordelia Schmid -
2013 Poster: Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization »
Julien Mairal -
2010 Poster: Network Flow Algorithms for Structured Sparsity »
Julien Mairal · Rodolphe Jenatton · Guillaume R Obozinski · Francis Bach -
2008 Poster: SDL: Supervised Dictionary Learning »
Julien Mairal · Francis Bach · Jean A Ponce · Guillermo Sapiro · Andrew Zisserman