Poster
On the Equivalence between Neural Network and Support Vector Machine
Yilan Chen · Wei Huang · Lam Nguyen · Tsui-Wei Weng
Recent research shows that the dynamics of an infinitely wide neural network (NN) trained by gradient descent can be characterized by the Neural Tangent Kernel (NTK) \citep{jacot2018neural}. Under the squared loss, an infinite-width NN trained by gradient descent with an infinitesimally small learning rate is equivalent to kernel regression with the NTK \citep{arora2019exact}. However, the equivalence is currently known only for ridge regression \citep{arora2019harnessing}; the equivalence between NNs and other kernel machines (KMs), e.g., the support vector machine (SVM), remains unknown. In this work, we establish the equivalence between NN and SVM, specifically between the infinitely wide NN trained with the soft margin loss and the standard soft-margin SVM with NTK trained by subgradient descent. Our main theoretical results include establishing the equivalence between NNs and a broad family of $\ell_2$-regularized KMs with finite-width bounds, which prior work cannot handle, and showing that every finite-width NN trained with such regularized loss functions is approximately a KM. Furthermore, we demonstrate that our theory enables three practical applications: (i) a \textit{non-vacuous} generalization bound for the NN via the corresponding KM; (ii) a \textit{nontrivial} robustness certificate for the infinite-width NN (where existing robustness verification methods would provide vacuous bounds); (iii) infinite-width NNs that are intrinsically more robust than those obtained from previous kernel regression.
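To make the equivalence concrete, the following is a minimal, illustrative sketch (not the authors' code) of the kernel side of the correspondence: the empirical NTK of a finite-width two-layer ReLU network is formed from parameter gradients at initialization, and a soft-margin SVM in the kernel parameterization $f(x) = \sum_j \beta_j K(x, x_j)$ is trained by subgradient descent on the hinge loss. The width, step size, regularization strength, and toy data are all assumptions made for the demo.

```python
import numpy as np

# Illustrative sketch: empirical NTK of a two-layer ReLU net
# f(x) = (1/sqrt(m)) * a^T relu(W x), then a soft-margin SVM with that
# kernel, trained by subgradient descent. All hyperparameters are assumed.
rng = np.random.default_rng(0)
d, m, n = 3, 2048, 20                     # input dim, width, train size

W = rng.normal(size=(m, d))               # first-layer weights at init
a = rng.choice([-1.0, 1.0], size=m)       # fixed output layer (standard NTK setup)

def grad_f(x):
    """Gradient of f(x) w.r.t. W, flattened: (1/sqrt(m)) a_j 1{w_j.x > 0} x."""
    act = (W @ x > 0).astype(float)       # ReLU derivative
    return ((a * act)[:, None] * x[None, :]).ravel() / np.sqrt(m)

X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])                      # separable toy labels
G = np.stack([grad_f(x) for x in X])      # parameter-gradient features
K = G @ G.T                               # empirical NTK Gram matrix

lam, lr, beta = 1e-3, 0.5, np.zeros(n)
for _ in range(500):                      # subgradient descent on the
    margins = y * (K @ beta)              # regularized hinge objective:
    viol = margins < 1.0                  # (1/n) sum max(0, 1 - y f) + (lam/2) b'Kb
    sub = -(viol * y) @ K / n + lam * (K @ beta)
    beta -= lr * sub

train_acc = np.mean(np.sign(K @ beta) == y)
print(train_acc)
```

In the infinite-width limit the paper's result says this kernel SVM and the soft-margin-trained NN coincide; at finite width the empirical NTK above is only an approximation to that limiting kernel.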
Author Information
Yilan Chen (University of California, San Diego)
Wei Huang (University of Technology Sydney)
Lam Nguyen (IBM Research, Thomas J. Watson Research Center)
Tsui-Wei Weng (MIT)
More from the Same Authors
- 2020: Paper 10: Certified Interpretability Robustness for Class Activation Mapping
  Alex Gu · Tsui-Wei Weng · Pin-Yu Chen · Sijia Liu · Luca Daniel
- 2022: c-MBA: Adversarial Attack for Cooperative MARL Using Learned Dynamics Model
  Nhan H Pham · Lam Nguyen · Jie Chen · Thanh Lam Hoang · Subhro Das · Lily Weng
- 2022 Spotlight: Weighted Mutual Learning with Diversity-Driven Model Compression
  Miao Zhang · Li Wang · David Campos · Wei Huang · Chenjuan Guo · Bin Yang
- 2022 Spotlight: Interpreting Operation Selection in Differentiable Architecture Search: A Perspective from Influence-Directed Explanations
  Miao Zhang · Wei Huang · Bin Yang
- 2022 Poster: Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis
  Wuyang Chen · Wei Huang · Xinyu Gong · Boris Hanin · Zhangyang Wang
- 2022 Poster: Deep Active Learning by Leveraging Training Dynamics
  Haonan Wang · Wei Huang · Ziwei Wu · Hanghang Tong · Andrew J Margenot · Jingrui He
- 2022 Poster: Weighted Mutual Learning with Diversity-Driven Model Compression
  Miao Zhang · Li Wang · David Campos · Wei Huang · Chenjuan Guo · Bin Yang
- 2022 Poster: Interpreting Operation Selection in Differentiable Architecture Search: A Perspective from Influence-Directed Explanations
  Miao Zhang · Wei Huang · Bin Yang
- 2021 Workshop: New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership
  Nghia Hoang · Lam Nguyen · Pin-Yu Chen · Tsui-Wei Weng · Sara Magliacane · Bryan Kian Hsiang Low · Anoop Deoras
- 2021 Poster: Robust Deep Reinforcement Learning through Adversarial Loss
  Tuomas Oikarinen · Wang Zhang · Alexandre Megretski · Luca Daniel · Tsui-Wei Weng
- 2021 Poster: FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization
  Quoc Tran Dinh · Nhan H Pham · Dzung Phan · Lam Nguyen
- 2021 Poster: Ensembling Graph Predictions for AMR Parsing
  Thanh Lam Hoang · Gabriele Picco · Yufang Hou · Young-Suk Lee · Lam Nguyen · Dzung Phan · Vanessa Lopez · Ramon Fernandez Astudillo
- 2020 Poster: Hybrid Variance-Reduced SGD Algorithms For Minimax Problems with Nonconvex-Linear Function
  Quoc Tran Dinh · Deyi Liu · Lam Nguyen
- 2020 Poster: A Scalable MIP-based Method for Learning Optimal Multivariate Decision Trees
  Haoran Zhu · Pavankumar Murali · Dzung Phan · Lam Nguyen · Jayant Kalagnanam
- 2020 Poster: Higher-Order Certification For Randomized Smoothing
  Jeet Mohapatra · Ching-Yun Ko · Tsui-Wei Weng · Pin-Yu Chen · Sijia Liu · Luca Daniel
- 2020 Spotlight: Higher-Order Certification For Randomized Smoothing
  Jeet Mohapatra · Ching-Yun Ko · Tsui-Wei Weng · Pin-Yu Chen · Sijia Liu · Luca Daniel
- 2019 Poster: Tight Dimension Independent Lower Bound on the Expected Convergence Rate for Diminishing Step Sizes in SGD
  Ha Nguyen · Lam Nguyen · Marten van Dijk
- 2018 Poster: Efficient Neural Network Robustness Certification with General Activation Functions
  Huan Zhang · Tsui-Wei Weng · Pin-Yu Chen · Cho-Jui Hsieh · Luca Daniel