Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? »
There have been long-standing controversies and inconsistencies in the literature over the experimental setup and the criteria for identifying a "winning ticket." To reconcile them, we revisit the definition of the lottery ticket hypothesis under more comprehensive and rigorous conditions. Under our new definition, we provide concrete evidence clarifying whether winning tickets exist across major DNN architectures and applications. Through extensive experiments, we quantitatively analyze the correlations between winning tickets and various experimental factors, and empirically characterize the patterns in our observations. We find that key training hyperparameters, such as the learning rate and the number of training epochs, as well as architectural characteristics such as model capacity and residual connections, are all highly correlated with whether and when winning tickets can be identified. Based on this analysis, we summarize a guideline for choosing parameter settings for specific architecture characteristics, which we hope will catalyze research progress on the lottery ticket hypothesis. Our code is publicly available at: https://github.com/boone891214/sanity-check-LTH.
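The winning-ticket criterion discussed above follows the standard lottery-ticket procedure: train a dense network, prune it by weight magnitude, rewind the surviving weights to their original initialization, retrain under the fixed mask, and compare against the dense baseline. Below is a minimal one-shot sketch of that check in PyTorch, for illustration only; `build_model`, `train`, and `evaluate` are assumed user-supplied helpers, and this is not the exact protocol or hyperparameter setup studied in the paper.

```python
# Hypothetical sketch of the standard one-shot lottery-ticket check,
# not the authors' exact experimental protocol.
import copy
import torch


def find_winning_ticket(build_model, train, evaluate, sparsity=0.8, tolerance=0.005):
    model = build_model()
    init_state = copy.deepcopy(model.state_dict())  # theta_0, kept for rewinding

    train(model)                                    # train the dense network
    dense_acc = evaluate(model)

    # Global magnitude pruning: remove the smallest |w| across all weight tensors.
    weights = [p for name, p in model.named_parameters() if "weight" in name]
    all_mags = torch.cat([w.detach().abs().flatten() for w in weights])
    threshold = torch.quantile(all_mags, sparsity)
    masks = [(w.detach().abs() > threshold).float() for w in weights]

    # Rewind surviving weights to their initial values and retrain with the mask fixed.
    model.load_state_dict(init_state)               # copies into the existing parameters
    with torch.no_grad():
        for w, m in zip(weights, masks):
            w.mul_(m)
    train(model, masks=masks)                       # the loop must re-apply masks after each update
    sparse_acc = evaluate(model)

    # "Winning ticket": the sparse subnetwork matches dense accuracy within a tolerance.
    return sparse_acc >= dense_acc - tolerance, dense_acc, sparse_acc
```

The paper's analysis concerns how choices inside `train` (learning rate, number of epochs) and properties of `build_model` (capacity, residual connections) affect whether this comparison succeeds.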
Author Information
Xiaolong Ma (Northeastern University)
Geng Yuan (Northeastern University)
Xuan Shen (Northeastern University)
Tianlong Chen (University of Texas at Austin)
Xuxi Chen (UT Austin)
Xiaohan Chen (The University of Texas at Austin)
Ning Liu (Midea)
Minghai Qin (WDC Research)
Sijia Liu (Michigan State University)
Zhangyang Wang (UT Austin)
Yanzhi Wang (Northeastern University)
More from the Same Authors
-
2020 : Paper 20: YOLObile: Real-Time Object Detection on Mobile Devices via Compression-Compilation Co-Design »
YUXUAN CAI · Wei Niu · Yanzhi Wang -
2021 Spotlight: MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge »
Geng Yuan · Xiaolong Ma · Wei Niu · Zhengang Li · Zhenglun Kong · Ning Liu · Yifan Gong · Zheng Zhan · Chaoyang He · Qing Jin · Siyue Wang · Minghai Qin · Bin Ren · Yanzhi Wang · Sijia Liu · Xue Lin -
2021 : Sign-MAML: Efficient Model-Agnostic Meta-Learning by SignSGD »
Chen Fan · Parikshit Ram · Sijia Liu -
2022 Poster: Teach Less, Learn More: On the Undistillable Classes in Knowledge Distillation »
Yichen Zhu · Ning Liu · Zhiyuan Xu · Xin Liu · Weibin Meng · Louis Wang · Zhicai Ou · Jian Tang -
2022 : HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing »
Tianlong Chen · Chengyue Gong · Daniel Diaz · Xuxi Chen · Jordan Wells · Qiang Liu · Zhangyang Wang · Andrew Ellington · Alex Dimakis · Adam Klivans -
2022 : On the Robustness of deep learning-based MRI Reconstruction to image transformations »
jinghan jia · Mingyi Hong · Yimeng Zhang · Mehmet Akcakaya · Sijia Liu -
2022 : Visual Prompting for Adversarial Robustness »
Aochuan Chen · Peter Lorenz · Yuguang Yao · Pin-Yu Chen · Sijia Liu -
2023 Poster: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds »
Yanyu Li · Huan Wang · Qing Jin · Ju Hu · Pavlo Chemerys · Yun Fu · Yanzhi Wang · Sergey Tulyakov · Jian Ren -
2023 Poster: The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter »
AJAY JAISWAL · Shiwei Liu · Tianlong Chen · Zhangyang Wang -
2023 Poster: H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models »
Zhenyu Zhang · Ying Sheng · Tianyi Zhou · Tianlong Chen · Lianmin Zheng · Ruisi Cai · Zhao Song · Yuandong Tian · Christopher Ré · Clark Barrett · Zhangyang Wang · Beidi Chen -
2023 Poster: PackQViT: Faster Sub-8-bit Vision Transformers via Full and Packed Quantization on the Mobile »
PEIYAN DONG · LEI LU · Chao Wu · Cheng Lyu · Geng Yuan · Hao Tang · Yanzhi Wang -
2023 Poster: On the Convergence and Sample Complexity Analysis of Deep Q-Networks with $\epsilon$-Greedy Exploration »
Shuai Zhang · Meng Wang · Hongkang Li · Miao Liu · Pin-Yu Chen · Songtao Lu · Sijia Liu · Keerthiram Murugesan · Subhajit Chaudhury -
2023 Poster: Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning »
Yihua Zhang · Yimeng Zhang · Aochuan Chen · jinghan jia · Jiancheng Liu · Gaowen Liu · Mingyi Hong · Shiyu Chang · Sijia Liu -
2023 Poster: HotBEV: Hardware-oriented Transformer-based Multi-View 3D Detector for BEV Perception »
PEIYAN DONG · Zhenglun Kong · Xin Meng · Pinrui Yu · Yifan Gong · Geng Yuan · Hao Tang · Yanzhi Wang -
2023 Poster: Model Sparsity Can Simplify Machine Unlearning »
jinghan jia · Jiancheng Liu · Parikshit Ram · Yuguang Yao · Gaowen Liu · Yang Liu · PRANAY SHARMA · Sijia Liu -
2022 Spotlight: Sparse Winning Tickets are Data-Efficient Image Recognizers »
Mukund Varma T · Xuxi Chen · Zhenyu Zhang · Tianlong Chen · Subhashini Venugopalan · Zhangyang Wang -
2022 Spotlight: Teach Less, Learn More: On the Undistillable Classes in Knowledge Distillation »
Yichen Zhu · Ning Liu · Zhiyuan Xu · Xin Liu · Weibin Meng · Louis Wang · Zhicai Ou · Jian Tang -
2022 : Q & A »
Sayak Paul · Sijia Liu · Pin-Yu Chen -
2022 : Deep dive on foundation models for code »
Sijia Liu -
2022 Tutorial: Foundational Robustness of Foundation Models »
Pin-Yu Chen · Sijia Liu · Sayak Paul -
2022 : Basics in foundation model and robustness »
Pin-Yu Chen · Sijia Liu -
2022 Poster: Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets »
Ruisi Cai · Zhenyu Zhang · Tianlong Chen · Xiaohan Chen · Zhangyang Wang -
2022 Poster: Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative »
Tianxin Wei · Yuning You · Tianlong Chen · Yang Shen · Jingrui He · Zhangyang Wang -
2022 Poster: Fairness Reprogramming »
Guanhua Zhang · Yihua Zhang · Yang Zhang · Wenqi Fan · Qing Li · Sijia Liu · Shiyu Chang -
2022 Poster: Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation »
Ziyu Jiang · Xuxi Chen · Xueqin Huang · Xianzhi Du · Denny Zhou · Zhangyang Wang -
2022 Poster: Sparse Winning Tickets are Data-Efficient Image Recognizers »
Mukund Varma T · Xuxi Chen · Zhenyu Zhang · Tianlong Chen · Subhashini Venugopalan · Zhangyang Wang -
2022 Poster: M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design »
hanxue liang · Zhiwen Fan · Rishov Sarkar · Ziyu Jiang · Tianlong Chen · Kai Zou · Yu Cheng · Cong Hao · Zhangyang Wang -
2022 Poster: Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again »
AJAY JAISWAL · Peihao Wang · Tianlong Chen · Justin Rousseau · Ying Ding · Zhangyang Wang -
2022 Poster: SparCL: Sparse Continual Learning on the Edge »
Zifeng Wang · Zheng Zhan · Yifan Gong · Geng Yuan · Wei Niu · Tong Jian · Bin Ren · Stratis Ioannidis · Yanzhi Wang · Jennifer Dy -
2022 Poster: Advancing Model Pruning via Bi-level Optimization »
Yihua Zhang · Yuguang Yao · Parikshit Ram · Pu Zhao · Tianlong Chen · Mingyi Hong · Yanzhi Wang · Sijia Liu -
2022 Poster: Layer Freezing & Data Sieving: Missing Pieces of a Generic Framework for Sparse Training »
Geng Yuan · Yanyu Li · Sheng Li · Zhenglun Kong · Sergey Tulyakov · Xulong Tang · Yanzhi Wang · Jian Ren -
2022 Poster: EfficientFormer: Vision Transformers at MobileNet Speed »
Yanyu Li · Geng Yuan · Yang Wen · Ju Hu · Georgios Evangelidis · Sergey Tulyakov · Yanzhi Wang · Jian Ren -
2022 Poster: A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking »
Keyu Duan · Zirui Liu · Peihao Wang · Wenqing Zheng · Kaixiong Zhou · Tianlong Chen · Xia Hu · Zhangyang Wang -
2021 Poster: Improving Contrastive Learning on Imbalanced Data via Open-World Sampling »
Ziyu Jiang · Tianlong Chen · Ting Chen · Zhangyang Wang -
2021 Poster: Sparse Training via Boosting Pruning Plasticity with Neuroregeneration »
Shiwei Liu · Tianlong Chen · Xiaohan Chen · Zahra Atashgahi · Lu Yin · Huanyu Kou · Li Shen · Mykola Pechenizkiy · Zhangyang Wang · Decebal Constantin Mocanu -
2021 Poster: Stronger NAS with Weaker Predictors »
Junru Wu · Xiyang Dai · Dongdong Chen · Yinpeng Chen · Mengchen Liu · Ye Yu · Zhangyang Wang · Zicheng Liu · Mei Chen · Lu Yuan -
2021 Poster: Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks »
Shuai Zhang · Meng Wang · Sijia Liu · Pin-Yu Chen · Jinjun Xiong -
2021 Poster: IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers »
Bowen Pan · Rameswar Panda · Yifan Jiang · Zhangyang Wang · Rogerio Feris · Aude Oliva -
2021 Poster: ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers »
Husheng Han · Kaidi Xu · Xing Hu · Xiaobing Chen · LING LIANG · Zidong Du · Qi Guo · Yanzhi Wang · Yunji Chen -
2021 Poster: Hyperparameter Tuning is All You Need for LISTA »
Xiaohan Chen · Jialin Liu · Zhangyang Wang · Wotao Yin -
2021 Poster: Adversarial Attack Generation Empowered by Min-Max Optimization »
Jingkang Wang · Tianyun Zhang · Sijia Liu · Pin-Yu Chen · Jiacen Xu · Makan Fardad · Bo Li -
2021 Poster: Chasing Sparsity in Vision Transformers: An End-to-End Exploration »
Tianlong Chen · Yu Cheng · Zhe Gan · Lu Yuan · Lei Zhang · Zhangyang Wang -
2021 Poster: Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective »
Tianlong Chen · Yu Cheng · Zhe Gan · Jingjing Liu · Zhangyang Wang -
2021 Poster: TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up »
Yifan Jiang · Shiyu Chang · Zhangyang Wang -
2021 Poster: AugMax: Adversarial Composition of Random Augmentations for Robust Training »
Haotao Wang · Chaowei Xiao · Jean Kossaifi · Zhiding Yu · Anima Anandkumar · Zhangyang Wang -
2021 Poster: Delayed Propagation Transformer: A Universal Computation Engine towards Practical Control in Cyber-Physical Systems »
Wenqing Zheng · Qiangqiang Guo · Hao Yang · Peihao Wang · Zhangyang Wang -
2021 Poster: The Elastic Lottery Ticket Hypothesis »
Xiaohan Chen · Yu Cheng · Shuohang Wang · Zhe Gan · Jingjing Liu · Zhangyang Wang -
2021 Poster: When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? »
Lijie Fan · Sijia Liu · Pin-Yu Chen · Gaoyuan Zhang · Chuang Gan -
2021 Poster: MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge »
Geng Yuan · Xiaolong Ma · Wei Niu · Zhengang Li · Zhenglun Kong · Ning Liu · Yifan Gong · Zheng Zhan · Chaoyang He · Qing Jin · Siyue Wang · Minghai Qin · Bin Ren · Yanzhi Wang · Sijia Liu · Xue Lin -
2021 Poster: You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership »
Xuxi Chen · Tianlong Chen · Zhenyu Zhang · Zhangyang Wang -
2020 Workshop: International Workshop on Scalability, Privacy, and Security in Federated Learning (SpicyFL 2020) »
Xiaolin Andy Li · Dejing Dou · Ameet Talwalkar · Hongyu Li · Jianzong Wang · Yanzhi Wang -
2020 Poster: Graph Contrastive Learning with Augmentations »
Yuning You · Tianlong Chen · Yongduo Sui · Ting Chen · Zhangyang Wang · Yang Shen -
2020 Poster: Robust Pre-Training by Adversarial Contrastive Learning »
Ziyu Jiang · Tianlong Chen · Ting Chen · Zhangyang Wang -
2020 Poster: Training Stronger Baselines for Learning to Optimize »
Tianlong Chen · Weiyi Zhang · Zhou Jingyang · Shiyu Chang · Sijia Liu · Lisa Amini · Zhangyang Wang -
2020 Spotlight: Training Stronger Baselines for Learning to Optimize »
Tianlong Chen · Weiyi Zhang · Zhou Jingyang · Shiyu Chang · Sijia Liu · Lisa Amini · Zhangyang Wang -
2020 Poster: Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free »
Haotao Wang · Tianlong Chen · Shupeng Gui · TingKuei Hu · Ji Liu · Zhangyang Wang -
2020 Poster: Higher-Order Certification For Randomized Smoothing »
Jeet Mohapatra · Ching-Yun Ko · Tsui-Wei Weng · Pin-Yu Chen · Sijia Liu · Luca Daniel -
2020 Poster: The Lottery Ticket Hypothesis for Pre-trained BERT Networks »
Tianlong Chen · Jonathan Frankle · Shiyu Chang · Sijia Liu · Yang Zhang · Zhangyang Wang · Michael Carbin -
2020 Spotlight: Higher-Order Certification For Randomized Smoothing »
Jeet Mohapatra · Ching-Yun Ko · Tsui-Wei Weng · Pin-Yu Chen · Sijia Liu · Luca Daniel -
2019 Workshop: AI for Humanitarian Assistance and Disaster Response »
Ritwik Gupta · Robin Murphy · Trevor Darrell · Eric Heim · Zhangyang Wang · Bryce Goodman · Piotr Biliński -
2019 Poster: E2-Train: Training State-of-the-art CNNs with Over 80% Less Energy »
Ziyu Jiang · Yue Wang · Xiaohan Chen · Pengfei Xu · Yang Zhao · Yingyan Lin · Zhangyang Wang -
2019 Poster: Learning to Optimize in Swarms »
Yue Cao · Tianlong Chen · Zhangyang Wang · Yang Shen -
2019 Poster: Model Compression with Adversarial Robustness: A Unified Optimization Framework »
Shupeng Gui · Haotao Wang · Haichuan Yang · Chen Yu · Zhangyang Wang · Ji Liu -
2018 Poster: Can We Gain More from Orthogonality Regularizations in Training Deep Networks? »
Nitin Bansal · Xiaohan Chen · Zhangyang Wang -
2018 Poster: Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds »
Xiaohan Chen · Jialin Liu · Zhangyang Wang · Wotao Yin -
2018 Spotlight: Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds »
Xiaohan Chen · Jialin Liu · Zhangyang Wang · Wotao Yin