Poster
Information-theoretic generalization bounds for black-box learning algorithms
Hrayr Harutyunyan · Maxim Raginsky · Greg Ver Steeg · Aram Galstyan
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve on existing information-theoretic bounds, apply to a wider range of algorithms, and address two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical deep-learning scenarios.
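The idea of bounding the generalization gap by the information that predictions carry about which examples were used for training can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's actual setup: a nearest-centroid learner on Gaussian data, pairs of examples where a random bit U_i selects the training member, and a plug-in discrete mutual-information estimator between each pair's predicted labels and U_i. The resulting quantity, averaged as sqrt(2 * MI) per pair, plays the role of a prediction-based bound tracking the empirical gap.

```python
import math
from collections import Counter

import numpy as np

rng = np.random.default_rng(0)


def plugin_mi(a, b):
    """Plug-in estimate of I(a; b) in nats for discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    mi = 0.0
    for (x, y), c in pab.items():
        pxy = c / n
        mi += pxy * math.log(pxy * n * n / (pa[x] * pb[y]))
    return max(mi, 0.0)


n_pairs, trials = 20, 200
# Supersample: each pair slot holds two iid labelled Gaussian points.
labels = rng.integers(0, 2, size=(n_pairs, 2))
X = rng.normal(size=(n_pairs, 2, 2)) + (2 * labels - 1)[..., None]

preds_rec = [[] for _ in range(n_pairs)]
u_rec = [[] for _ in range(n_pairs)]
gaps = []
for _ in range(trials):
    U = rng.integers(0, 2, size=n_pairs)      # which member of each pair trains
    idx = np.arange(n_pairs)
    Xtr, ytr = X[idx, U], labels[idx, U]
    # Nearest-centroid "learner": deterministic given the training set.
    mu = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((X[:, :, None, :] - mu) ** 2).sum(-1), axis=-1)
    err = pred != labels
    gaps.append(err[idx, 1 - U].mean() - err[idx, U].mean())  # test - train 0/1 loss
    for i in range(n_pairs):
        preds_rec[i].append(tuple(pred[i]))   # predictions on both pair members
        u_rec[i].append(int(U[i]))

# Prediction-based bound: average sqrt(2 * I(predictions_i; U_i)) over pairs.
bound = float(np.mean([math.sqrt(2 * plugin_mi(preds_rec[i], u_rec[i]))
                       for i in range(n_pairs)]))
gap = float(np.mean(gaps))
print(f"avg generalization gap ~= {gap:.3f}, prediction-based bound ~= {bound:.3f}")
```

Note that the mutual information is estimated from the (discrete) predicted labels alone, never from the learner's weights, which is what makes this style of bound computable for deterministic algorithms.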
Author Information
Hrayr Harutyunyan (USC Information Sciences Institute)
Maxim Raginsky (University of Illinois at Urbana-Champaign)
Greg Ver Steeg (USC Information Sciences Institute)
Aram Galstyan (USC Information Sciences Institute)
More from the Same Authors
- 2022: Federated Progressive Sparsification (Purge-Merge-Tune)+
  Dimitris Stripelis · Umang Gupta · Greg Ver Steeg · Jose-Luis Ambite
- 2022: Bounding the Effects of Continuous Treatments for Hidden Confounders
  Myrl Marmarelis · Greg Ver Steeg · Neda Jahanshad · Aram Galstyan
- 2023 Poster: A unified framework for information-theoretic generalization bounds
  Yifeng Chu · Maxim Raginsky
- 2021 Poster: Hamiltonian Dynamics with Non-Newtonian Momentum for Rapid Sampling
  Greg Ver Steeg · Aram Galstyan
- 2021 Poster: Implicit SVD for Graph Representation Learning
  Sami Abu-El-Haija · Hesham Mostafa · Marcel Nassar · Valentino Crespi · Greg Ver Steeg · Aram Galstyan
- 2020 Workshop: Deep Learning through Information Geometry
  Pratik Chaudhari · Alexander Alemi · Varun Jog · Dhagash Mehta · Frank Nielsen · Stefano Soatto · Greg Ver Steeg
- 2019: Poster Session
  Gergely Flamich · Shashanka Ubaru · Charles Zheng · Josip Djolonga · Kristoffer Wickstrøm · Diego Granziol · Konstantinos Pitas · Jun Li · Robert Williamson · Sangwoong Yoon · Kwot Sin Lee · Julian Zilly · Linda Petrini · Ian Fischer · Zhe Dong · Alexander Alemi · Bao-Ngoc Nguyen · Rob Brekelmans · Tailin Wu · Aditya Mahajan · Alexander Li · Kirankumar Shiragur · Yair Carmon · Linara Adilova · SHIYU LIU · Bang An · Sanjeeb Dash · Oktay Gunluk · Arya Mazumdar · Mehul Motani · Julia Rosenzweig · Michael Kamp · Marton Havasi · Leighton P Barnes · Zhengqing Zhou · Yi Hao · Dylan Foster · Yuval Benjamini · Nati Srebro · Michael Tschannen · Paul Rubenstein · Sylvain Gelly · John Duchi · Aaron Sidford · Robin Ru · Stefan Zohren · Murtaza Dalal · Michael A Osborne · Stephen J Roberts · Moses Charikar · Jayakumar Subramanian · Xiaodi Fan · Max Schwarzer · Nicholas Roberts · Simon Lacoste-Julien · Vinay Prabhu · Aram Galstyan · Greg Ver Steeg · Lalitha Sankar · Yung-Kyun Noh · Gautam Dasarathy · Frank Park · Ngai-Man (Man) Cheung · Ngoc-Trung Tran · Linxiao Yang · Ben Poole · Andrea Censi · Tristan Sylvain · R Devon Hjelm · Bangjie Liu · Jose Gallego-Posada · Tyler Sypherd · Kai Yang · Jan Nikolas Morshuis
- 2019 Poster: Fast structure learning with modular regularization
  Greg Ver Steeg · Hrayr Harutyunyan · Daniel Moyer · Aram Galstyan
- 2019 Spotlight: Fast structure learning with modular regularization
  Greg Ver Steeg · Hrayr Harutyunyan · Daniel Moyer · Aram Galstyan
- 2019 Poster: Exact Rate-Distortion in Autoencoders via Echo Noise
  Rob Brekelmans · Daniel Moyer · Aram Galstyan · Greg Ver Steeg
- 2019 Poster: Universal Approximation of Input-Output Maps by Temporal Convolutional Nets
  Joshua Hanson · Maxim Raginsky
- 2018 Poster: Invariant Representations without Adversarial Training
  Daniel Moyer · Shuyang Gao · Rob Brekelmans · Aram Galstyan · Greg Ver Steeg
- 2018 Poster: Minimax Statistical Learning with Wasserstein distances
  Jaeho Lee · Maxim Raginsky
- 2018 Spotlight: Minimax Statistical Learning with Wasserstein distances
  Jaeho Lee · Maxim Raginsky
- 2017: Coffee break and Poster Session II
  Mohamed Kane · Albert Haque · Vagelis Papalexakis · John Guibas · Peter Li · Carlos Arias · Eric Nalisnick · Padhraic Smyth · Frank Rudzicz · Xia Zhu · Theodore Willke · Noemie Elhadad · Hans Raffauf · Harini Suresh · Paroma Varma · Yisong Yue · Ognjen (Oggi) Rudovic · Luca Foschini · Syed Rameel Ahmad · Hasham ul Haq · Valerio Maggio · Giuseppe Jurman · Sonali Parbhoo · Pouya Bashivan · Jyoti Islam · Mirco Musolesi · Chris Wu · Alexander Ratner · Jared Dunnmon · Cristóbal Esteban · Aram Galstyan · Greg Ver Steeg · Hrant Khachatrian · Marc Górriz · Mihaela van der Schaar · Anton Nemchenko · Manasi Patwardhan · Tanay Tandon
- 2017 Poster: Information-theoretic analysis of generalization capability of learning algorithms
  Aolin Xu · Maxim Raginsky
- 2017 Spotlight: Information-theoretic analysis of generalization capability of learning algorithms
  Aolin Xu · Maxim Raginsky
- 2016 Poster: Variational Information Maximization for Feature Selection
  Shuyang Gao · Greg Ver Steeg · Aram Galstyan
- 2014 Poster: Discovering Structure in High-Dimensional Data Through Correlation Explanation
  Greg Ver Steeg · Aram Galstyan
- 2011 Poster: Lower Bounds for Passive and Active Learning
  Maxim Raginsky · Sasha Rakhlin
- 2011 Spotlight: Lower Bounds for Passive and Active Learning
  Maxim Raginsky · Sasha Rakhlin
- 2011 Poster: Comparative Analysis of Viterbi Training and Maximum Likelihood Estimation for HMMs
  Armen Allahverdyan · Aram Galstyan
- 2009 Poster: Locality-sensitive binary codes from shift-invariant kernels
  Maxim Raginsky · Svetlana Lazebnik
- 2009 Oral: Locality-Sensitive Binary Codes from Shift-Invariant Kernels
  Maxim Raginsky · Svetlana Lazebnik
- 2008 Poster: Near-minimax recursive density estimation on the binary hypercube
  Maxim Raginsky · Svetlana Lazebnik · Rebecca Willett · Jorge G Silva
- 2008 Spotlight: Near-minimax recursive density estimation on the binary hypercube
  Maxim Raginsky · Svetlana Lazebnik · Rebecca Willett · Jorge G Silva