Understanding deep learning calls for addressing three fundamental questions: expressiveness, optimization, and generalization. This talk will describe a series of works aimed at unraveling some of the mysteries behind expressiveness. I will begin by showing that state-of-the-art deep learning architectures, such as convolutional networks, can be represented as tensor networks --- a prominent computational model for quantum many-body simulations. This connection will inspire the use of quantum entanglement for defining measures of the data dependencies modeled by deep networks. I will then derive a quantum max-flow / min-cut theorem characterizing the entanglement captured by deep networks. The theorem gives rise to new results that shed light on expressiveness in deep learning and, in addition, provides new tools for deep network design. Works covered in the talk were in collaboration with Yoav Levine, Or Sharir, Ronen Tamari, David Yakira and Amnon Shashua.
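The entanglement measure alluded to above can be illustrated concretely. For a function represented by a coefficient tensor, the dependence it models between two groups of inputs is commonly quantified by the entropy of the singular-value spectrum of the matricization along that partition (the von Neumann entanglement entropy). The sketch below is illustrative only; the tensor, shapes, and function names are assumptions, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4-index tensor standing in for the coefficient tensor of a
# function over four inputs, each ranging over 3 values (illustrative).
A = rng.standard_normal((3, 3, 3, 3))

def entanglement_entropy(tensor, left_axes):
    """Entropy of the normalized squared singular values of the
    matricization that groups `left_axes` as rows."""
    axes = list(range(tensor.ndim))
    right_axes = [a for a in axes if a not in left_axes]
    rows = int(np.prod([tensor.shape[a] for a in left_axes]))
    mat = np.transpose(tensor, left_axes + right_axes).reshape(rows, -1)
    s = np.linalg.svd(mat, compute_uv=False)
    p = s**2 / np.sum(s**2)        # normalized spectrum
    p = p[p > 0]
    return -np.sum(p * np.log(p))  # von Neumann (entanglement) entropy

# Dependence modeled across the partition {inputs 0, 1} vs {inputs 2, 3}
print(entanglement_entropy(A, [0, 1]))
```

A high entropy across a given partition indicates that the represented function models strong dependencies between the two input groups; the entropy is bounded by the log of the matricization rank (here at most ln 9).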
Author Information
Nadav Cohen (Tel Aviv University)
More from the Same Authors
- 2021 Spotlight: Continuous vs. Discrete Optimization of Deep Neural Networks » Omer Elkabetz · Nadav Cohen
- 2023 Poster: On the Ability of Graph Neural Networks to Model Interactions Between Vertices » Noam Razin · Tom Verbin · Nadav Cohen
- 2023 Poster: What Makes Data Suitable for a Locally Connected Neural Network? A Necessary and Sufficient Condition Based on Quantum Entanglement » Yotam Alexander · Nimrod De La Vega · Noam Razin · Nadav Cohen
- 2021: Nadav Cohen » Nadav Cohen
- 2021: Implicit Regularization in Quantum Tensor Networks » Nadav Cohen
- 2021 Poster: Continuous vs. Discrete Optimization of Deep Neural Networks » Omer Elkabetz · Nadav Cohen
- 2020: Panel Discussion 1: Theoretical, Algorithmic and Physical » Jacob Biamonte · Ivan Oseledets · Jens Eisert · Nadav Cohen · Guillaume Rabusseau · Xiao-Yang Liu
- 2020: Invited Talk 2 Q&A by Cohen » Nadav Cohen
- 2020 Workshop: First Workshop on Quantum Tensor Networks in Machine Learning » Xiao-Yang Liu · Qibin Zhao · Jacob Biamonte · Cesar F Caiafa · Paul Pu Liang · Nadav Cohen · Stefan Leichenauer
- 2020 Poster: Implicit Regularization in Deep Learning May Not Be Explainable by Norms » Noam Razin · Nadav Cohen
- 2019 Poster: Implicit Regularization in Deep Matrix Factorization » Sanjeev Arora · Nadav Cohen · Wei Hu · Yuping Luo
- 2019 Spotlight: Implicit Regularization in Deep Matrix Factorization » Sanjeev Arora · Nadav Cohen · Wei Hu · Yuping Luo
- 2018: Poster Session » Sujay Sanghavi · Vatsal Shah · Yanyao Shen · Tianchen Zhao · Yuandong Tian · Tomer Galanti · Mufan Li · Gilad Cohen · Daniel Rothchild · Aristide Baratin · Devansh Arpit · Vagelis Papalexakis · Michael Perlmutter · Ashok Vardhan Makkuva · Pim de Haan · Yingyan Lin · Wanmo Kang · Cheolhyoung Lee · Hao Shen · Sho Yaida · Dan Roberts · Nadav Cohen · Philippe Casgrain · Dejiao Zhang · Tengyu Ma · Avinash Ravichandran · Julian Emilio Salazar · Bo Li · Davis Liang · Christopher Wong · Glen Bigan Mbeng · Animesh Garg