Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have enabled reliable learning and recall of a number of patterns exponential in network size. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize their performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise actually improves the performance of the recall phase. Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
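To make the recall dynamics concrete, below is a minimal toy sketch in Python, not the paper's exact construction or algorithm: patterns are assumed to satisfy linear constraints checked by constraint neurons, recall proceeds by bit-flipping-style updates driven by violated constraints, and the parameter sigma (an illustrative knob introduced here) injects Gaussian internal noise into each pattern neuron's decision variable.

import numpy as np

rng = np.random.default_rng(0)

# Toy pattern set (illustrative only): with W = [A | -A], every vector
# x = (v, v) satisfies W @ x = 0, so constraint neurons are silent
# exactly when the pattern is consistent.
k, m = 20, 30
A = rng.integers(-1, 2, size=(m, k))   # entries in {-1, 0, 1}
W = np.hstack([A, -A])
n = 2 * k

def recall(x, sigma=0.0, iters=300):
    """Bit-flipping-style recall sketch. sigma sets the internal
    noise level in each pattern neuron's decision variable."""
    x = x.astype(float).copy()
    for _ in range(iters):
        syndrome = W @ x                    # constraint-neuron outputs
        if not syndrome.any():
            return x, True                  # all constraints satisfied
        # feedback: how strongly each pattern neuron is implicated
        feedback = W.T @ np.sign(syndrome)
        noisy = feedback + sigma * rng.standard_normal(n)  # internal noise
        j = int(np.argmax(np.abs(noisy)))
        x[j] -= np.sign(noisy[j])           # nudge the most suspect neuron
    return x, False

# Compare noiseless and noisy recall on patterns hit by 3 external errors.
for sigma in (0.0, 1.0):
    ok = 0
    for _ in range(200):
        v = rng.integers(0, 4, size=k).astype(float)
        x_true = np.concatenate([v, v])
        e = np.zeros(n)
        e[rng.choice(n, 3, replace=False)] = rng.choice([-1.0, 1.0], 3)
        x_hat, converged = recall(x_true + e, sigma=sigma)
        ok += int(converged and np.allclose(x_hat, x_true))
    print(f"sigma={sigma}: recall success rate {ok / 200:.2f}")

Comparing success rates at sigma = 0 and sigma > 0 in a toy experiment of this kind illustrates the effect analyzed in the paper: mild internal noise can break the deterministic cycles that trap noiseless recall, while noise above a threshold degrades performance.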
Author Information
Amin Karbasi (Yale University)
Amir Hesam Salavati (EPFL)
Amin Shokrollahi (EPFL)
Lav R Varshney (IBM Watson Research Center)
Related Events (a corresponding poster, oral, or spotlight)
- 2013 Spotlight: Noise-Enhanced Associative Memories
  Fri. Dec 6th, 11:38 -- 11:42 PM, Room: Harvey's Convention Center Floor, CC
More from the Same Authors
- 2022: Exact Gradient Computation for Spiking Neural Networks
  Jane Lee · Saeid Haghighatshoar · Amin Karbasi
- 2023 Poster: Optimal Learners for Realizable Regression: PAC Learning and Online Learning
  Idan Attias · Steve Hanneke · Alkis Kalavasis · Amin Karbasi · Grigoris Velegkas
- 2023 Poster: Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization
  Liang Zhang · Junchi Yang · Amin Karbasi · Niao He
- 2023 Poster: Replicability in Reinforcement Learning
  Amin Karbasi · Grigoris Velegkas · Lin Yang · Felix Zhou
- 2023 Poster: Replicable Clustering
  Hossein Esfandiari · Amin Karbasi · Vahab Mirrokni · Grigoris Velegkas · Felix Zhou
- 2023 Oral: Optimal Learners for Realizable Regression: PAC Learning and Online Learning
  Idan Attias · Steve Hanneke · Alkis Kalavasis · Amin Karbasi · Grigoris Velegkas
- 2022 Poster: Submodular Maximization in Clean Linear Time
  Wenxin Li · Moran Feldman · Ehsan Kazemi · Amin Karbasi
- 2022 Poster: Universal Rates for Interactive Learning
  Steve Hanneke · Amin Karbasi · Shay Moran · Grigoris Velegkas
- 2022 Poster: Black-Box Generalization: Stability of Zeroth-Order Learning
  Konstantinos Nikolakakis · Farzin Haddadpour · Dionysis Kalogerias · Amin Karbasi
- 2022 Poster: Reinforcement Learning with Logarithmic Regret and Policy Switches
  Grigoris Velegkas · Zhuoran Yang · Amin Karbasi
- 2022 Poster: Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes
  Alkis Kalavasis · Grigoris Velegkas · Amin Karbasi
- 2022 Poster: Fast Neural Kernel Embeddings for General Activations
  Insu Han · Amir Zandieh · Jaehoon Lee · Roman Novak · Lechao Xiao · Amin Karbasi
- 2022 Poster: On Optimal Learning Under Targeted Data Poisoning
  Steve Hanneke · Amin Karbasi · Mohammad Mahmoody · Idan Mehalel · Shay Moran
- 2021 Poster: An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks
  Shashank Rajput · Kartik Sreenivasan · Dimitris Papailiopoulos · Amin Karbasi
- 2021 Poster: Multiple Descent: Design Your Own Generalization Curve
  Lin Chen · Yifei Min · Mikhail Belkin · Amin Karbasi
- 2021 Poster: Parallelizing Thompson Sampling
  Amin Karbasi · Vahab Mirrokni · Mohammad Shadravan
- 2021 Poster: Submodular + Concave
  Siddharth Mitra · Moran Feldman · Amin Karbasi
- 2013 Poster: Distributed Submodular Maximization: Identifying Representative Elements in Massive Data
  Baharan Mirzasoleiman · Amin Karbasi · Rik Sarkar · Andreas Krause
- 2011 Poster: Neural Reconstruction with Approximate Message Passing (NeuRAMP)
  Alyson Fletcher · Sundeep Rangan · Lav R Varshney · Aniruddha Bhargava