As annotations of data can be scarce in large-scale practical problems, leveraging unlabelled examples is one of the most important aspects of machine learning. This is the aim of semi-supervised learning. To benefit from access to unlabelled data, it is natural to smoothly diffuse knowledge from labelled examples to unlabelled ones, which motivates the use of Laplacian regularization. Yet current implementations of Laplacian regularization suffer from several drawbacks, notably the well-known curse of dimensionality. In this paper, we design a new class of algorithms that overcomes this issue, unveiling a large body of spectral filtering methods. Additionally, we provide a statistical analysis showing that our estimators exhibit desirable behaviors. They are implemented through (reproducing) kernel methods, for which we provide realistic computational guidelines to make our method usable with large amounts of data.
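To illustrate the Laplacian-regularization idea the abstract starts from (not the paper's kernel-based spectral-filtering estimator), the sketch below computes the classical harmonic solution on a similarity graph: minimize the Laplacian smoothness penalty f^T L f subject to f matching the known labels. The toy 1-D data, the Gaussian-weight graph, and the hand-picked bandwidth are all illustrative assumptions.

```python
import numpy as np

# Toy semi-supervised setup: six 1-D points forming two clusters,
# with one labelled point per cluster.
X = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
labelled = np.array([0, 5])      # indices whose labels are known
y = np.array([-1.0, 1.0])        # labels of those two points

# Gaussian similarity graph (bandwidth 0.2 chosen by hand for this data).
W = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * 0.2**2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian

# Harmonic (label-propagation) solution: f minimizes f^T L f with
# f fixed on labelled points, i.e. solve L_uu f_u = -L_ul y_l.
unlabelled = np.setdiff1d(np.arange(len(X)), labelled)
f = np.zeros(len(X))
f[labelled] = y
f[unlabelled] = np.linalg.solve(
    L[np.ix_(unlabelled, unlabelled)],
    -L[np.ix_(unlabelled, labelled)] @ y,
)

print(np.sign(f))  # points near x=0 inherit -1, points near x=1 inherit +1
```

The diffusion behavior is visible directly: unlabelled points take the sign of the labelled point in their cluster, since the smoothness penalty is cheapest when f is nearly constant on densely connected regions. The drawbacks the abstract mentions (e.g. the curse of dimensionality) concern how such estimators degrade as the input dimension grows, which is what the paper's spectral-filtering algorithms address.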
Author Information
Vivien Cabannes (École Normale Supérieure)
Loucas Pillaud-Vivien (EPFL)
Francis Bach (INRIA - École Normale Supérieure)
Alessandro Rudi (INRIA, École Normale Supérieure)
More from the Same Authors
- 2020: Diptychs of human and machine perceptions »
  Vivien Cabannes
- 2021 Spotlight: Mixability made efficient: Fast online multiclass logistic regression »
  Rémi Jézéquel · Pierre Gaillard · Alessandro Rudi
- 2021 Spotlight: Batch Normalization Orthogonalizes Representations in Deep Random Networks »
  Hadi Daneshmand · Amir Joudaki · Francis Bach
- 2021 Spotlight: Beyond Tikhonov: faster learning with self-concordant losses, via iterative regularization »
  Gaspard Beugnot · Julien Mairal · Alessandro Rudi
- 2022 Poster: Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs »
  Etienne Boursier · Loucas Pillaud-Vivien · Nicolas Flammarion
- 2022 Poster: Active Labeling: Streaming Stochastic Gradients »
  Vivien Cabannes · Francis Bach · Vianney Perchet · Alessandro Rudi
- 2021 Test Of Time: Online Learning for Latent Dirichlet Allocation »
  Matthew Hoffman · Francis Bach · David Blei
- 2021 Poster: Implicit Bias of SGD for Diagonal Linear Networks: a Provable Benefit of Stochasticity »
  Scott Pesme · Loucas Pillaud-Vivien · Nicolas Flammarion
- 2021 Poster: Last iterate convergence of SGD for Least-Squares in the Interpolation regime. »
  Aditya Vardhan Varre · Loucas Pillaud-Vivien · Nicolas Flammarion
- 2021 Poster: Mixability made efficient: Fast online multiclass logistic regression »
  Rémi Jézéquel · Pierre Gaillard · Alessandro Rudi
- 2021 Poster: PSD Representations for Effective Probability Models »
  Alessandro Rudi · Carlo Ciliberto
- 2021 Oral: Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms »
  Mathieu Even · Raphaël Berthier · Francis Bach · Nicolas Flammarion · Hadrien Hendrikx · Pierre Gaillard · Laurent Massoulié · Adrien Taylor
- 2021 Poster: Batch Normalization Orthogonalizes Representations in Deep Random Networks »
  Hadi Daneshmand · Amir Joudaki · Francis Bach
- 2021 Poster: Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms »
  Mathieu Even · Raphaël Berthier · Francis Bach · Nicolas Flammarion · Hadrien Hendrikx · Pierre Gaillard · Laurent Massoulié · Adrien Taylor
- 2021 Poster: Beyond Tikhonov: faster learning with self-concordant losses, via iterative regularization »
  Gaspard Beugnot · Julien Mairal · Alessandro Rudi
- 2019: Poster Session 2 »
  Mayur Saxena · Nicholas Frosst · Vivien Cabannes · Gene Kogan · Austin Dill · Anurag Sarkar · Joel Ruben Antony Moniz · Vibert Thio · Scott Sievert · Lia Coleman · Frederik De Bleser · Brian Quanz · Jonathon Kereliuk · Panos Achlioptas · Mohamed Elhoseiny · Songwei Ge · Aidan Gomez · Jamie Brew
- 2018 Poster: Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes »
  Loucas Pillaud-Vivien · Alessandro Rudi · Francis Bach
- 2017 Poster: Generalization Properties of Learning with Random Features »
  Alessandro Rudi · Lorenzo Rosasco
- 2017 Oral: Generalization Properties of Learning with Random Features »
  Alessandro Rudi · Lorenzo Rosasco
- 2017 Poster: Consistent Multitask Learning with Nonlinear Output Relations »
  Carlo Ciliberto · Alessandro Rudi · Lorenzo Rosasco · Massimiliano Pontil
- 2017 Poster: FALKON: An Optimal Large Scale Kernel Method »
  Alessandro Rudi · Luigi Carratino · Lorenzo Rosasco
- 2016 Poster: A Consistent Regularization Approach for Structured Prediction »
  Carlo Ciliberto · Lorenzo Rosasco · Alessandro Rudi
- 2015 Poster: Less is More: Nyström Computational Regularization »
  Alessandro Rudi · Raffaello Camoriano · Lorenzo Rosasco
- 2015 Oral: Less is More: Nyström Computational Regularization »
  Alessandro Rudi · Raffaello Camoriano · Lorenzo Rosasco
- 2015 Poster: Rethinking LDA: Moment Matching for Discrete ICA »
  Anastasia Podosinnikova · Francis Bach · Simon Lacoste-Julien
- 2015 Poster: Spectral Norm Regularization of Orthonormal Representations for Graph Transduction »
  Rakesh Shivanna · Bibaswan K Chatterjee · Raman Sankaran · Chiranjib Bhattacharyya · Francis Bach