Strong Lottery Ticket Hypothesis with $\epsilon$–perturbation
Fangshuo Liao · Zheyang Xiong · Anastasios Kyrillidis
Event URL: https://openreview.net/forum?id=u1oRgxaAotJ
The strong Lottery Ticket Hypothesis (LTH) claims that there exists a subnetwork of a sufficiently large, randomly initialized neural network that approximates a given target network without the need for training. This work extends the theoretical guarantees of the strong LTH literature to a scenario closer to the original LTH, by generalizing the weight change achieved in the pre-training step to a perturbation around the initialization. In particular, we focus on the following open questions: By allowing an $\varepsilon$-scale perturbation on the random initial weights, can we reduce the over-parameterization requirement for the candidate network in the strong LTH? Furthermore, does the weight change induced by SGD coincide with a good set of such perturbations?
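As a minimal formal sketch of the question (the notation here is ours, not taken from the paper): let $f$ be the target network and $g(\cdot\,; \theta)$ the randomly initialized candidate with weights $\theta_0 \in \mathbb{R}^d$. The standard strong LTH asks for a binary mask $m \in \{0,1\}^d$ such that $g(\cdot\,; m \odot \theta_0) \approx f$; the $\varepsilon$-perturbation variant additionally allows a bounded weight shift $\delta$ before masking:

$$\exists\, m \in \{0,1\}^d,\ \delta \in \mathbb{R}^d,\ \|\delta\|_\infty \le \varepsilon : \quad \sup_{\|x\| \le 1} \big\| f(x) - g\big(x;\, m \odot (\theta_0 + \delta)\big) \big\| \le \epsilon_{\mathrm{apx}},$$

where $\epsilon_{\mathrm{apx}}$ is the target approximation accuracy. The open questions above then ask how the required width of $g$ shrinks as $\varepsilon$ grows, and whether the weight change produced by SGD realizes such a $\delta$.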
Author Information
Fangshuo Liao (Rice University)
Zheyang Xiong (Rice University)
Anastasios Kyrillidis (Rice University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 : Strong Lottery Ticket Hypothesis with $\epsilon$–perturbation
More from the Same Authors
- 2021 : Acceleration and Stability of the Stochastic Proximal Point Algorithm
  Junhyung Lyle Kim · Panos Toulis · Anastasios Kyrillidis
- 2022 : LOFT: Finding Lottery Tickets through Filter-wise Training
  Qihan Wang · Chen Dun · Fangshuo Liao · Christopher Jermaine · Anastasios Kyrillidis
- 2022 : Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout
  Chen Dun · Mirian Hipolito Garcia · Dimitrios Dimitriadis · Christopher Jermaine · Anastasios Kyrillidis
- 2022 : GIST: Distributed Training for Large-Scale Graph Convolutional Networks
  Cameron Wolfe · Jingkang Yang · Fangshuo Liao · Arindam Chowdhury · Chen Dun · Artun Bayer · Santiago Segarra · Anastasios Kyrillidis
- 2022 : Poster Session 2
  Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi
- 2022 : Contributed Talks 3
  Cristóbal Guzmán · Fangshuo Liao · Vishwak Srinivasan · Zhiyuan Li
- 2019 : Final remarks
  Anastasios Kyrillidis · Albert Berahas · Fred Roosta · Michael Mahoney
- 2019 Workshop: Beyond first order methods in machine learning systems
  Anastasios Kyrillidis · Albert Berahas · Fred Roosta · Michael Mahoney
- 2019 : Opening Remarks
  Anastasios Kyrillidis · Albert Berahas · Fred Roosta · Michael Mahoney
- 2019 Poster: Learning Sparse Distributions using Iterative Hard Thresholding
  Jacky Zhang · Rajiv Khanna · Anastasios Kyrillidis · Sanmi Koyejo