Mixup is a regularization technique that artificially produces new training samples as convex combinations of original training points. This simple technique has shown strong empirical performance and has been heavily used in semi-supervised learning methods such as MixMatch~\citep{berthelot2019mixmatch} and Interpolation Consistency Training (ICT)~\citep{verma2019interpolation}. In this paper, we look at mixup through a representation learning lens in a semi-supervised learning setup. In particular, we study the role of mixup in promoting linearity in the learned network representations. Towards this, we study two questions: (1) how does the mixup loss, which enforces linearity in the last network layer, propagate this linearity to the earlier layers; and (2) how does enforcing a stronger mixup loss on more than two data points affect the convergence of training? We empirically investigate these properties of mixup on vision datasets such as CIFAR-10, CIFAR-100, and SVHN. Our results show that supervised mixup training does not make all the network layers linear; in fact, the intermediate layers become more non-linear during mixup training than in a network trained without mixup. However, when mixup is used as an unsupervised loss, we observe that all the network layers become more linear, resulting in faster training convergence.
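Concretely, the standard mixup recipe draws an interpolation weight $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$ and mixes a mini-batch with a shuffled copy of itself, applying the same weight to inputs and labels. The NumPy sketch below illustrates this general recipe; the helper name `mixup_batch` and the default `alpha=0.2` are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return convex combinations of a batch with a shuffled copy of itself.

    x: (batch, ...) array of inputs; y: (batch, num_classes) one-hot labels.
    alpha: Beta-distribution concentration controlling interpolation strength.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)             # interpolation weight in [0, 1]
    perm = rng.permutation(len(x))           # random partner for each sample
    x_mix = lam * x + (1.0 - lam) * x[perm]  # mix inputs
    y_mix = lam * y + (1.0 - lam) * y[perm]  # mix labels with the same weight
    return x_mix, y_mix

# Example: mix a toy CIFAR-10-shaped mini-batch.
x = np.random.rand(8, 32, 32, 3)
y = np.eye(10)[np.random.randint(0, 10, size=8)]
x_mix, y_mix = mixup_batch(x, y)
```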
Author Information
Arslan Chaudhry (DeepMind)
I am a Research Scientist at DeepMind in Mountain View. I am interested in machine learning models that can learn efficiently from multiple tasks. Towards this, I study continual, meta-, and transfer learning.
Aditya Menon (Google)
Andreas Veit (Google)
Sadeep Jayasumana (Google)
Srikumar Ramalingam (Google)
Sanjiv Kumar (Google Research)
More from the Same Authors
- 2021: An Empirical Study of Pre-trained Models on Out-of-distribution Generalization »
  Yaodong Yu · Heinrich Jiang · Dara Bahri · Hossein Mobahi · Seungyeon Kim · Ankit Rawat · Andreas Veit · Yi Ma
- 2022 Poster: TPU-KNN: K Nearest Neighbor Search at Peak FLOP/s »
  Felix Chern · Blake Hechtman · Andy Davis · Ruiqi Guo · David Majnemer · Sanjiv Kumar
- 2022 Poster: Decoupled Context Processing for Context Augmented Language Modeling »
  Zonglin Li · Ruiqi Guo · Sanjiv Kumar
- 2022 Poster: Post-hoc estimators for learning to defer to an expert »
  Harikrishna Narasimhan · Wittawat Jitkrittum · Aditya Menon · Ankit Rawat · Sanjiv Kumar
- 2021 Poster: Batch Active Learning at Scale »
  Gui Citovsky · Giulia DeSalvo · Claudio Gentile · Lazaros Karydas · Anand Rajagopalan · Afshin Rostamizadeh · Sanjiv Kumar
- 2021 Poster: Training Over-parameterized Models with Non-decomposable Objectives »
  Harikrishna Narasimhan · Aditya Menon
- 2021 Poster: Efficient Training of Retrieval Models using Negative Cache »
  Erik Lindgren · Sashank Reddi · Ruiqi Guo · Sanjiv Kumar
- 2021 Poster: Scaling Up Exact Neural Network Compression by ReLU Stability »
  Thiago Serra · Xin Yu · Abhinav Kumar · Srikumar Ramalingam
- 2020 Poster: Why are Adaptive Methods Good for Attention Models? »
  Jingzhao Zhang · Sai Praneeth Karimireddy · Andreas Veit · Seungyeon Kim · Sashank Reddi · Sanjiv Kumar · Suvrit Sra
- 2020 Poster: Multi-Stage Influence Function »
  Hongge Chen · Si Si · Yang Li · Ciprian Chelba · Sanjiv Kumar · Duane Boning · Cho-Jui Hsieh
- 2020 Poster: O(n) Connections are Expressive Enough: Universal Approximability of Sparse Transformers »
  Chulhee Yun · Yin-Wen Chang · Srinadh Bhojanapalli · Ankit Singh Rawat · Sashank Reddi · Sanjiv Kumar
- 2020 Poster: Robust large-margin learning in hyperbolic space »
  Melanie Weber · Manzil Zaheer · Ankit Singh Rawat · Aditya Menon · Sanjiv Kumar
- 2020 Poster: Learning discrete distributions: user vs item-level privacy »
  Yuhan Liu · Ananda Theertha Suresh · Felix Xinnan Yu · Sanjiv Kumar · Michael D Riley
- 2019 Poster: Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces »
  Chuan Guo · Ali Mousavi · Xiang Wu · Daniel Holtmann-Rice · Satyen Kale · Sashank Reddi · Sanjiv Kumar
- 2019 Poster: Noise-tolerant fair classification »
  Alex Lamy · Ziyuan Zhong · Aditya Menon · Nakul Verma
- 2019 Poster: Multilabel reductions: what is my loss optimising? »
  Aditya Menon · Ankit Singh Rawat · Sashank Reddi · Sanjiv Kumar
- 2019 Spotlight: Multilabel reductions: what is my loss optimising? »
  Aditya Menon · Ankit Singh Rawat · Sashank Reddi · Sanjiv Kumar
- 2019 Poster: Sampled Softmax with Random Fourier Features »
  Ankit Singh Rawat · Jiecao Chen · Felix Xinnan Yu · Ananda Theertha Suresh · Sanjiv Kumar
- 2018 Poster: Adaptive Methods for Nonconvex Optimization »
  Manzil Zaheer · Sashank Reddi · Devendra S Sachan · Satyen Kale · Sanjiv Kumar
- 2018 Poster: cpSGD: Communication-efficient and differentially-private distributed SGD »
  Naman Agarwal · Ananda Theertha Suresh · Felix Xinnan Yu · Sanjiv Kumar · Brendan McMahan
- 2018 Spotlight: cpSGD: Communication-efficient and differentially-private distributed SGD »
  Naman Agarwal · Ananda Theertha Suresh · Felix Xinnan Yu · Sanjiv Kumar · Brendan McMahan
- 2017: Now Playing: Continuous low-power music recognition »
  Marvin Ritter · Ruiqi Guo · Sanjiv Kumar · Julian J Odell · Mihajlo Velimirović · Dominik Roblek · James Lyon
- 2017: Poster Session 1 and Lunch »
  Sumanth Dathathri · Akshay Rangamani · Prakhar Sharma · Aruni RoyChowdhury · Madhu Advani · William Guss · Chulhee Yun · Corentin Hardy · Michele Alberti · Devendra Sachan · Andreas Veit · Takashi Shinozaki · Peter Chin
- 2017 Poster: Multiscale Quantization for Fast Similarity Search »
  Xiang Wu · Ruiqi Guo · Ananda Theertha Suresh · Sanjiv Kumar · Daniel Holtmann-Rice · David Simcha · Felix Yu
- 2016 Poster: Orthogonal Random Features »
  Felix Xinnan Yu · Ananda Theertha Suresh · Krzysztof M Choromanski · Daniel Holtmann-Rice · Sanjiv Kumar
- 2016 Oral: Orthogonal Random Features »
  Felix Xinnan Yu · Ananda Theertha Suresh · Krzysztof M Choromanski · Daniel Holtmann-Rice · Sanjiv Kumar
- 2016 Poster: Residual Networks Behave Like Ensembles of Relatively Shallow Networks »
  Andreas Veit · Michael J Wilber · Serge Belongie
- 2015 Workshop: The 1st International Workshop "Feature Extraction: Modern Questions and Challenges" »
  Dmitry Storcheus · Sanjiv Kumar · Afshin Rostamizadeh
- 2015 Poster: Spherical Random Features for Polynomial Kernels »
  Jeffrey Pennington · Felix Yu · Sanjiv Kumar
- 2015 Spotlight: Spherical Random Features for Polynomial Kernels »
  Jeffrey Pennington · Felix Yu · Sanjiv Kumar
- 2015 Poster: Structured Transforms for Small-Footprint Deep Learning »
  Vikas Sindhwani · Tara Sainath · Sanjiv Kumar
- 2015 Spotlight: Structured Transforms for Small-Footprint Deep Learning »
  Vikas Sindhwani · Tara Sainath · Sanjiv Kumar
- 2014 Session: Oral Session 8 »
  Sanjiv Kumar
- 2014 Poster: Discrete Graph Hashing »
  Wei Liu · Cun Mu · Sanjiv Kumar · Shih-Fu Chang
- 2014 Spotlight: Discrete Graph Hashing »
  Wei Liu · Cun Mu · Sanjiv Kumar · Shih-Fu Chang
- 2012 Poster: Angular Quantization based Binary Codes for Fast Similarity Search »
  Yunchao Gong · Sanjiv Kumar · Vishal Verma · Svetlana Lazebnik
- 2009 Poster: Ensemble Nystrom Method »
  Sanjiv Kumar · Mehryar Mohri · Ameet S Talwalkar