Spiking neural networks (SNNs) have recently emerged as an alternative to traditional neural networks, holding promise for energy-efficiency benefits. However, the classic backpropagation algorithm for training traditional networks has been notoriously difficult to apply to SNNs due to hard thresholding and discontinuities at spike times. Consequently, most prior work has assumed that exact gradients of SNNs w.r.t. their weights do not exist and has focused on approximation methods that produce surrogate gradients. In this paper, (1) by applying the implicit function theorem to SNNs at the discrete spike times, we prove that, albeit non-differentiable in time, SNNs have well-defined gradients w.r.t. their weights, and (2) we propose a novel training algorithm, called forward propagation (FP), that computes exact gradients for SNNs. Our derivation of FP provides insight into why related algorithms such as Hebbian learning, as well as recently proposed surrogate gradient methods, may perform well.
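The central observation can be sketched with a single threshold-crossing condition; the following is a minimal illustration under standard leaky integrate-and-fire assumptions (the notation V, \vartheta, t_k is introduced here for exposition and is not taken from the paper), not the paper's full derivation. Suppose the membrane potential V(t; w) of a neuron with weights w reaches the firing threshold \vartheta at spike time t_k, so that

F(t_k, w) := V(t_k; w) - \vartheta = 0.

Provided the crossing is transversal, i.e. \partial V / \partial t \neq 0 at t = t_k, the implicit function theorem gives a well-defined derivative of the spike time with respect to the weights,

\frac{\mathrm{d} t_k}{\mathrm{d} w} = - \left. \frac{\partial V(t; w) / \partial w}{\partial V(t; w) / \partial t} \right|_{t = t_k},

so any loss expressed through spike times is differentiable in w, even though the spike train itself is discontinuous in time.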
Author Information
Jane Lee (Yale University)
Saeid Haghighatshoar (SynSense AG)
Saeid Haghighatshoar received his B.Sc. in electronics and his M.Sc. in communications systems from Sharif University of Technology, Tehran, Iran, in 2007 and 2009, respectively, and his Ph.D. in computer and communication sciences from EPFL, Lausanne, Switzerland, in 2014. From 2015 to 2021, he held several R&D positions in signal processing, wireless communications, machine learning, smart sensing, and the Internet of Things at TU Berlin, Germany (2015-2019), and CSEM, Neuchâtel, Switzerland (2020-2021). He is currently a senior R&D machine learning engineer working on software and hardware development for spiking neural networks and neuromorphic computation at SynSense, Zurich, Switzerland (www.synsense.ai).
Amin Karbasi (Yale University)
More from the Same Authors
- 2023 Poster: Optimal Learners for Realizable Regression: PAC Learning and Online Learning
  Idan Attias · Steve Hanneke · Alkis Kalavasis · Amin Karbasi · Grigoris Velegkas
- 2023 Poster: Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization
  Liang Zhang · Junchi YANG · Amin Karbasi · Niao He
- 2023 Poster: Learning Exponential Families from Truncated Samples
  Jane Lee · Andre Wibisono · Emmanouil Zampetakis
- 2023 Poster: Replicability in Reinforcement Learning
  Amin Karbasi · Grigoris Velegkas · Lin Yang · Felix Zhou
- 2023 Poster: Replicable Clustering
  Hossein Esfandiari · Amin Karbasi · Vahab Mirrokni · Grigoris Velegkas · Felix Zhou
- 2023 Oral: Optimal Learners for Realizable Regression: PAC Learning and Online Learning
  Idan Attias · Steve Hanneke · Alkis Kalavasis · Amin Karbasi · Grigoris Velegkas
- 2022: Poster Session 2
  Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi
- 2022 Poster: Submodular Maximization in Clean Linear Time
  Wenxin Li · Moran Feldman · Ehsan Kazemi · Amin Karbasi
- 2022 Poster: Universal Rates for Interactive Learning
  Steve Hanneke · Amin Karbasi · Shay Moran · Grigoris Velegkas
- 2022 Poster: Black-Box Generalization: Stability of Zeroth-Order Learning
  Konstantinos Nikolakakis · Farzin Haddadpour · Dionysis Kalogerias · Amin Karbasi
- 2022 Poster: Reinforcement Learning with Logarithmic Regret and Policy Switches
  Grigoris Velegkas · Zhuoran Yang · Amin Karbasi
- 2022 Poster: Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes
  Alkis Kalavasis · Grigoris Velegkas · Amin Karbasi
- 2022 Poster: Fast Neural Kernel Embeddings for General Activations
  Insu Han · Amir Zandieh · Jaehoon Lee · Roman Novak · Lechao Xiao · Amin Karbasi
- 2022 Poster: On Optimal Learning Under Targeted Data Poisoning
  Steve Hanneke · Amin Karbasi · Mohammad Mahmoody · Idan Mehalel · Shay Moran
- 2021 Poster: An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks
  Shashank Rajput · Kartik Sreenivasan · Dimitris Papailiopoulos · Amin Karbasi
- 2021 Poster: Multiple Descent: Design Your Own Generalization Curve
  Lin Chen · Yifei Min · Mikhail Belkin · Amin Karbasi
- 2021 Poster: Parallelizing Thompson Sampling
  Amin Karbasi · Vahab Mirrokni · Mohammad Shadravan
- 2021 Poster: Submodular + Concave
  Siddharth Mitra · Moran Feldman · Amin Karbasi
- 2013 Poster: Noise-Enhanced Associative Memories
  Amin Karbasi · Amir Hesam Salavati · Amin Shokrollahi · Lav R Varshney
- 2013 Poster: Distributed Submodular Maximization: Identifying Representative Elements in Massive Data
  Baharan Mirzasoleiman · Amin Karbasi · Rik Sarkar · Andreas Krause
- 2013 Spotlight: Noise-Enhanced Associative Memories
  Amin Karbasi · Amir Hesam Salavati · Amin Shokrollahi · Lav R Varshney