Data augmentation has become an important part of modern deep learning pipelines and is typically needed to achieve state-of-the-art performance on many learning tasks. It applies transformations that leave the data approximately invariant, such as rotations, scalings, and color shifts, and adds the transformed examples to the training set. However, these transformations are often chosen heuristically, and a clear theoretical framework explaining the performance benefits of data augmentation is not available. In this paper, we develop such a framework: we model data augmentation as averaging over the orbits of a group that keeps the data distribution approximately invariant, and show that this leads to variance reduction. We study finite-sample and asymptotic empirical risk minimization, and work out as examples the variance reduction in certain two-layer neural networks. We further propose a strategy to exploit the benefits of data augmentation for general learning tasks.
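The orbit-averaging idea in the abstract can be illustrated with a minimal numerical sketch (our illustration, not the authors' code). When the data distribution is invariant under a group G, replacing each sample's statistic by its average over the G-orbit keeps the estimator unbiased while reducing its variance. Here we take the simplest possible group, G = {identity, negation} acting on X ~ N(0, 1), which is sign-invariant; the statistic `h` and sample sizes are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):
    # An arbitrary (non-invariant) statistic whose mean we estimate.
    return x + x**2

def orbit_average(x):
    # Average h over the orbit {x, -x} under the sign-flip group;
    # this is the group-theoretic "augmented" version of h.
    return 0.5 * (h(x) + h(-x))

n, reps = 50, 2000
plain, augmented = [], []
for _ in range(reps):
    x = rng.standard_normal(n)
    plain.append(h(x).mean())                   # plain empirical average
    augmented.append(orbit_average(x).mean())   # orbit-averaged estimator

# Both estimate E[h(X)] = E[X] + E[X^2] = 1, but orbit averaging
# removes the odd part x, so its variance is strictly smaller
# (per sample: Var(x^2) = 2 versus Var(x + x^2) = 3).
print(np.var(plain) > np.var(augmented))  # prints True
```

For image data the group would instead be, e.g., rotations or flips, and the average would be taken over the transformed copies of each training image, which is exactly what heuristic augmentation pipelines approximate by sampling random transformations.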
Author Information
Shuxiao Chen (University of Pennsylvania)
Edgar Dobriban (University of Pennsylvania)
Jane Lee (University of Pennsylvania)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Oral: A Group-Theoretic Framework for Data Augmentation
  Wed. Dec 9th, 02:30 -- 02:45 PM, Orals & Spotlights: Kernel Methods/Optimization
More from the Same Authors
- 2022 Poster: Fair Bayes-Optimal Classifiers Under Predictive Parity
  Xianli Zeng · Edgar Dobriban · Guang Cheng
- 2022: Exact Gradient Computation for Spiking Neural Networks
  Jane Lee · Saeid Haghighatshoar · Amin Karbasi
- 2022: Poster Session 2
  Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi
- 2022 Poster: PAC Prediction Sets for Meta-Learning
  Sangdon Park · Edgar Dobriban · Insup Lee · Osbert Bastani
- 2022 Poster: Collaborative Learning of Discrete Distributions under Heterogeneity and Communication Constraints
  Xinmeng Huang · Donghwan Lee · Edgar Dobriban · Hamed Hassani
- 2020 Poster: Implicit Regularization and Convergence for Weight Normalization
  Xiaoxia Wu · Edgar Dobriban · Tongzheng Ren · Shanshan Wu · Zhiyuan Li · Suriya Gunasekar · Rachel Ward · Qiang Liu
- 2020 Poster: Optimal Iterative Sketching Methods with the Subsampled Randomized Hadamard Transform
  Jonathan Lacotte · Sifan Liu · Edgar Dobriban · Mert Pilanci
- 2020 Poster: Label-Aware Neural Tangent Kernel: Toward Better Generalization and Local Elasticity
  Shuxiao Chen · Hangfeng He · Weijie Su