There is a long history of algorithmic development for solving inverse problems arising in sensing and imaging systems and beyond. Examples include medical and computational imaging, compressive sensing, and community detection in networks. Until recently, most algorithms for solving inverse problems in the imaging and network sciences were based on static signal models derived from physics or intuition, such as wavelets or sparse representations.
Today, the best-performing approaches for the aforementioned image reconstruction and sensing problems are based on deep learning. These approaches learn various elements of the method, including i) signal representations, ii) step sizes and parameters of iterative algorithms, iii) regularizers, and iv) entire inverse functions. For example, it has recently been shown that transforming an iterative, physics-based algorithm into a deep network whose parameters are learned from training data yields faster convergence and/or better-quality solutions for a variety of inverse problems. Moreover, even with very little or no learning, deep neural networks enable superior performance for classical linear inverse problems such as denoising and compressive sensing. Motivated by these success stories, researchers are redesigning traditional imaging and sensing systems.
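As a concrete and purely illustrative sketch of the algorithm-unrolling idea mentioned above, the snippet below fixes the number of iterations of an ISTA-style sparse-recovery loop and exposes its per-iteration step sizes and soft-thresholds as parameters. The choice of ISTA, the NumPy implementation, and all names are assumptions made here for illustration, not a method endorsed by the workshop; in practice the loop would be written in an automatic-differentiation framework so that the weights can be trained end-to-end from example measurement/signal pairs.

import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(y, A, weights):
    # A fixed number of ISTA-style iterations; `weights` holds the per-layer
    # (step size, threshold) pairs that would be learned from training data
    # rather than derived analytically from the forward operator A.
    x = np.zeros(A.shape[1])
    for step, thresh in weights:
        grad = A.T @ (A @ x - y)              # gradient of the data-fit term
        x = soft_threshold(x - step * grad, thresh)
    return x

# Toy usage: recover a sparse vector from a few random linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[:5] = 1.0
y = A @ x_true
x_hat = unrolled_ista(y, A, weights=[(0.1, 0.01)] * 20)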
However, the field is mostly wide open, with a range of theoretical and practical questions unanswered. In particular, approaches based on deep neural networks often lack the guarantees of traditional physics-based methods and, while typically superior, can make drastic reconstruction errors, such as hallucinating a tumor in an MRI reconstruction.
This workshop aims to bring together theoreticians and practitioners in order to chart out recent advances and discuss new directions in deep-neural-network-based approaches for solving inverse problems in the imaging and network sciences.
Fri 8:30 a.m. - 8:35 a.m.
Opening Remarks
Reinhard Heckel · Paul Hand · Alex Dimakis · Joan Bruna · Deanna Needell · Richard Baraniuk
Fri 8:40 a.m. - 9:10 a.m.
The spiked matrix model with generative priors (Talk)
Using a low-dimensional parametrization of signals is a generic and powerful way to enhance performance in signal processing and statistical inference. A very popular and widely explored type of dimensionality reduction is sparsity; another type is generative modelling of signal distributions. Generative models based on neural networks, such as GANs or variational auto-encoders, are particularly performant and are gaining in applicability. In this paper we study spiked matrix models, where a low-rank matrix is observed through a noisy channel. This problem with sparse structure of the spikes has attracted broad attention in the past literature. Here, we replace the sparsity assumption by generative modelling and investigate the consequences for statistical and algorithmic properties. We analyze the Bayes-optimal performance under specific generative models for the spike. In contrast with the sparsity assumption, we do not observe regions of parameters where statistical performance is superior to the best known algorithmic performance. We show that in the analyzed cases the approximate message passing algorithm is able to reach optimal performance. We also design enhanced spectral algorithms and analyze their performance and thresholds using random matrix theory, showing their superiority to classical principal component analysis. We complement our theoretical results by illustrating the performance of the spectral algorithms when the spikes come from real datasets. (A schematic form of the observation model follows this entry.)
Lenka Zdeborová
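As a point of reference for the abstract above, one common way of writing a symmetric (Wigner-type) spiked matrix observation with a generative prior on the spike is

$Y = \sqrt{\lambda/n}\, v^{*} (v^{*})^{\top} + \xi, \qquad \xi_{ij}=\xi_{ji}\sim\mathcal{N}(0,1), \qquad v^{*} = G(z^{*}), \quad z^{*}\in\mathbb{R}^{k},\ k\ll n,$

where $\lambda$ is the signal-to-noise ratio and $G$ is a fixed generative network (e.g., the decoder of a trained GAN or variational auto-encoder); the goal is to estimate the spike $v^{*}\in\mathbb{R}^{n}$ from the noisy matrix $Y$. The normalization and notation here are conventional choices made for orientation and are not necessarily those used in the talk; the point they are meant to highlight is the replacement of the classical sparsity assumption on $v^{*}$ by the constraint $v^{*}=G(z^{*})$.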
Fri 9:10 a.m. - 9:40 a.m.
Robust One-Bit Recovery via ReLU Generative Networks: Improved Statistical Rate and Global Landscape Analysis (Talk)
We study the robust one-bit compressed sensing problem whose goal is to design an algorithm that faithfully recovers any sparse target vector $\theta_0\in\mathbb{R}^d$ \emph{uniformly} from $m$ quantized noisy measurements. Under the assumption that the measurements are sub-Gaussian, to recover any $k$-sparse $\theta_0$ ($k\ll d$) \emph{uniformly} up to an error $\varepsilon$ with high probability, the best known computationally tractable algorithm requires\footnote{Here, an algorithm is ``computationally tractable'' if it has provable convergence guarantees. The notation $\tilde{\mathcal{O}}(\cdot)$ omits a logarithm factor of $\varepsilon^{-1}$.} $m\geq\tilde{\mathcal{O}}(k\log d/\varepsilon^4)$. In this paper, we consider a new framework for the one-bit sensing problem where the sparsity is implicitly enforced by mapping a low-dimensional representation $x_0$ through a known $n$-layer ReLU generative network $G:\mathbb{R}^k\rightarrow\mathbb{R}^d$. Such a framework imposes a low-dimensional prior on $\theta_0$ without a known basis. We propose to recover the target $G(x_0)$ via an unconstrained empirical risk minimization (ERM) problem under a much weaker \emph{sub-exponential measurement assumption}. For such a problem, we establish a joint statistical and computational analysis. In particular, we prove that the ERM estimator in this new framework achieves an improved statistical rate of $m=\tilde{\mathcal{O}}(kn\log d/\varepsilon^2)$, recovering any $G(x_0)$ uniformly up to an error $\varepsilon$. Moreover, from the lens of computation, despite non-convexity, we prove that the objective of our ERM problem has no spurious stationary point, that is, any stationary point is equally good for recovering the true target up to scaling with a certain accuracy. Our analysis sheds some light on the possibility of inverting a deep generative model under partial and quantized measurements, complementing the recent success of using deep generative models for inverse problems. (A schematic form of the measurement model and the ERM program is given after this entry.)
Shuang Qiu · Xiaohan Wei · Zhuoran Yang
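To make the setting of the abstract above concrete, a standard way of writing a robust one-bit measurement model with a generative prior, together with a generic unconstrained ERM recovery program, is

$y_i = \xi_i \cdot \mathrm{sign}(\langle a_i, \theta_0\rangle), \qquad i=1,\dots,m, \qquad \theta_0 = G(x_0), \quad x_0\in\mathbb{R}^{k},$

$\hat{x} \in \arg\min_{x\in\mathbb{R}^{k}} \; \frac{1}{m}\sum_{i=1}^{m} \ell\big(y_i, \langle a_i, G(x)\rangle\big), \qquad \hat{\theta} = G(\hat{x}),$

where the $a_i$ are the measurement vectors, the $\xi_i\in\{\pm 1\}$ model possible sign corruptions, $G:\mathbb{R}^{k}\rightarrow\mathbb{R}^{d}$ is the known $n$-layer ReLU network, and $\ell$ is a suitable one-bit loss. This display is a schematic assumption written for orientation; the precise loss and corruption model analyzed in the paper are not reproduced here.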
Fri 9:40 a.m. - 10:30 a.m.
Coffee Break
Fri 10:30 a.m. - 11:00 a.m.
Computational microscopy in scattering media (Talk)
Computational imaging involves the joint design of imaging system hardware and software, optimizing across the entire pipeline from acquisition to reconstruction. Computers can replace bulky and expensive optics by solving computational inverse problems. This talk will describe new microscopes that use computational imaging to enable 3D fluorescence and phase measurement, with image reconstruction algorithms based on large-scale nonlinear non-convex optimization combined with unrolled neural networks. We further discuss engineering of data capture for computational microscopes by end-to-end learned design.
Laura Waller
Fri 11:00 a.m. - 11:30 a.m.
Denoising via Early Stopping (Talk)
Mahdi Soltanolkotabi
Fri 11:30 a.m. - 12:00 p.m.
Neural Reparameterization Improves Structural Optimization (Talk)
Structural optimization is a popular method for designing objects such as bridge trusses, airplane wings, and optical devices. Unfortunately, the quality of solutions depends heavily on how the problem is parameterized. In this paper, we propose using the implicit bias over functions induced by neural networks to improve the parameterization of structural optimization. Rather than directly optimizing densities on a grid, we instead optimize the parameters of a neural network which outputs those densities. This reparameterization leads to different and often better solutions. On a selection of 116 structural optimization tasks, our approach produces an optimal design 50% more often than the best baseline method. (A toy sketch of this reparameterization follows this entry.)
Stephan Hoyer · Jascha Sohl-Dickstein · Sam Greydanus
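The following toy sketch illustrates the reparameterization idea from the abstract above: instead of optimizing a grid of densities directly, one optimizes the weights of a small network whose output is the density grid. The one-hidden-layer network, the smoothness-plus-volume objective, and the finite-difference gradients are all illustrative stand-ins (assumptions made here), so treat this as a sketch of the structure of the approach rather than the method itself.

import numpy as np

rng = np.random.default_rng(0)

def densities_from_params(params):
    # Tiny "neural" reparameterization: a one-hidden-layer network with a fixed
    # input maps its weights to a 1-D grid of material densities in (0, 1).
    W1, b1, W2, b2 = params
    h = np.tanh(W1 @ np.ones(8) + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))

def toy_objective(rho, target_volume=0.4):
    # Placeholder for a structural objective such as compliance: penalize
    # deviation from a target volume fraction and reward spatial smoothness.
    return (rho.mean() - target_volume) ** 2 + 0.1 * np.mean(np.diff(rho) ** 2)

def numerical_grad(f, params, eps=1e-5):
    # Finite differences stand in for the autodiff/adjoint gradients used in practice.
    grads = []
    for p in params:
        g = np.zeros_like(p)
        it = np.nditer(p, flags=["multi_index"])
        for _ in it:
            i = it.multi_index
            old = p[i]
            p[i] = old + eps; f_plus = f(params)
            p[i] = old - eps; f_minus = f(params)
            p[i] = old
            g[i] = (f_plus - f_minus) / (2.0 * eps)
        grads.append(g)
    return grads

# Optimize the network parameters, not the density grid itself.
params = [rng.normal(scale=0.3, size=s) for s in [(16, 8), (16,), (64, 16), (64,)]]
loss = lambda ps: toy_objective(densities_from_params(ps))
for _ in range(50):
    params = [p - 0.5 * g for p, g in zip(params, numerical_grad(loss, params))]

In the setting of the talk, the network output is a 2-D density field and the objective comes from a physics-based structural simulation rather than this toy surrogate; the essential design choice that carries over is that gradient descent acts on the network weights, so the network's implicit bias shapes which density fields are reachable.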
Fri 12:00 p.m. - 2:00 p.m.
Lunch Break
Fri 2:00 p.m. - 2:30 p.m.
Learning-Based Low-Rank Approximations (Talk)
Piotr Indyk
Fri 2:30 p.m. - 3:00 p.m.
Blind Denoising, Self-Supervision, and Implicit Inverse Problems (Talk)
We will discuss a self-supervised approach to the foundational inverse problem of denoising (Noise2Self). By taking advantage of statistical independence in the noise, we can estimate the mean-square error for a large class of deep architectures without access to ground truth. This allows us to train a neural network to denoise from noisy data alone, and also to compare between architectures, selecting one which will produce images with the lowest MSE. However, architectures with the same MSE performance can produce qualitatively different results, i.e., the hypersurface of images with fixed MSE is very heterogeneous. We will discuss ongoing work in understanding the types of artifacts which different denoising architectures give rise to. (A minimal sketch of the masked self-supervised loss follows this entry.)
Joshua Batson
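A minimal sketch of the self-supervision idea described above: hold out a random subset of pixels, fill them in from their neighbors, denoise, and score the output only on the held-out pixels against their noisy values. When the noise is independent across pixels, this score estimates the true mean-square error up to an additive constant, which is what allows comparing denoisers without ground truth. The box-filter "denoiser", the neighbor-fill rule, and all names below are illustrative assumptions, not the Noise2Self implementation.

import numpy as np

def masked_self_supervised_loss(noisy, denoiser, mask):
    # Replace held-out pixels with the average of their 4-neighborhood, denoise,
    # and measure the error only on the held-out pixels.
    neighbor_avg = (np.roll(noisy, 1, axis=0) + np.roll(noisy, -1, axis=0) +
                    np.roll(noisy, 1, axis=1) + np.roll(noisy, -1, axis=1)) / 4.0
    masked_input = np.where(mask, neighbor_avg, noisy)
    prediction = denoiser(masked_input)
    return np.mean((prediction[mask] - noisy[mask]) ** 2)

def box_filter(img):
    # Simple 5-point averaging filter standing in for a trained denoising network.
    out = img.copy()
    for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        out += np.roll(img, shift, axis=axis)
    return out / 5.0

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
mask = rng.random(clean.shape) < 0.05            # hold out ~5% of the pixels
print(masked_self_supervised_loss(noisy, box_filter, mask))

When the denoiser is a neural network, the same masked loss can be used directly as the training objective, since it never touches the clean image.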
Fri 3:00 p.m. - 3:30 p.m.
Learning Regularizers from Data (Talk)
Regularization techniques are widely employed in the solution of inverse problems in data analysis and scientific computing due to their effectiveness in addressing difficulties arising from ill-posedness. In their most common manifestation, these methods take the form of penalty functions added to the objective in variational approaches for solving inverse problems. The purpose of the penalty function is to induce a desired structure in the solution, and these functions are specified based on prior domain-specific expertise. We consider the problem of learning suitable regularization functions from data in settings in which precise domain knowledge is not directly available; the objective is to identify a regularizer that promotes the type of structure contained in the data. The regularizers obtained using our framework are specified as convex functions that can be computed efficiently via semidefinite programming. Our approach for learning such semidefinite regularizers combines recent techniques for rank minimization problems with the Operator Sinkhorn procedure. (Joint work with Yong Sheng Soh.) (The generic variational form referred to here is written out after this entry.)
Venkat Chandrasekaran
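For reference, the variational form of regularized inversion referred to in the abstract above is the standard penalized program

$\hat{x} \in \arg\min_{x} \; \tfrac{1}{2}\,\| \mathcal{A}(x) - y \|_2^2 \; + \; \lambda\, R(x),$

where $\mathcal{A}$ is the forward (measurement) operator, $y$ is the observed data, and $\lambda > 0$ is a regularization weight. Classical choices fix the penalty $R$ in advance (e.g., an $\ell_1$ norm or total variation); the approach described in the talk instead learns a convex, semidefinite-representable $R$ from example data so that it is small on signals with the structure present in that data. The specific parametrization of the learned $R$ is not reproduced here.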
Fri 4:15 p.m. - 6:00 p.m.
Poster Session
Jonathan Scarlett · Piotr Indyk · Ali Vakilian · Adrian Weller · Partha P Mitra · Benjamin Aubin · Bruno Loureiro · Florent Krzakala · Lenka Zdeborová · Kristina Monakhova · Joshua Yurtsever · Laura Waller · Hendrik Sommerhoff · Michael Moeller · Rushil Anirudh · Shuang Qiu · Xiaohan Wei · Zhuoran Yang · Jayaraman Thiagarajan · Salman Asif · Michael Gillhofer · Johannes Brandstetter · Sepp Hochreiter · Felix Petersen · Dhruv Patel · Assad Oberai · Akshay Kamath · Sushrut Karmalkar · Eric Price · Ali Ahmed · Zahra Kadkhodaie · Sreyas Mohan · Eero Simoncelli · Carlos Fernandez-Granda · Oscar Leong · Wesam Sakla · Rebecca Willett · Stephan Hoyer · Jascha Sohl-Dickstein · Sam Greydanus · Gauri Jagatap · Chinmay Hegde · Michael Kellman · Jonathan Tamir · Nouamane Laanait · Ousmane Dia · Mirco Ravanelli · Jonathan Binas · Negar Rostamzadeh · Shirin Jalali · Tiantian Fang · Alex Schwing · Sébastien Lachapelle · Philippe Brouillard · Tristan Deleu · Simon Lacoste-Julien · Stella Yu · Arya Mazumdar · Ankit Singh Rawat · Yue Zhao · Jianshu Chen · Xiaoyang Li · Hubert Ramsauer · Gabrio Rizzuti · Nikolaos Mitsakos · Dingzhou Cao · Thomas Strohmer · Yang Li · Pei Peng · Gregory Ongie
Author Information
Reinhard Heckel (TUM)
Paul Hand (Northeastern University)
Richard Baraniuk (Rice University)
Joan Bruna (NYU)
Alex Dimakis (University of Texas, Austin)
Deanna Needell (UCLA)
More from the Same Authors
-
2021 : An Extensible Benchmark Suite for Learning to Simulate Physical Systems »
Karl Otness · Arvi Gjoka · Joan Bruna · Daniele Panozzo · Benjamin Peherstorfer · Teseo Schneider · Denis Zorin -
2021 Spotlight: Offline RL Without Off-Policy Evaluation »
David Brandfonbrener · Will Whitney · Rajesh Ranganath · Joan Bruna -
2021 : Quantile Filtered Imitation Learning »
David Brandfonbrener · Will Whitney · Rajesh Ranganath · Joan Bruna -
2022 : Score-based Seismic Inverse Problems »
Sriram Ravula · Dimitri Voytan · Elad Liebman · Ram Tuvi · Yash Gandhi · Hamza Ghani · Alex Ardel · Mrinal Sen · Alex Dimakis -
2022 : Investigating Reproducibility from the Decision Boundary Perspective. »
Gowthami Somepalli · Arpit Bansal · Liam Fowl · Ping-yeh Chiang · Yehuda Dar · Richard Baraniuk · Micah Goldblum · Tom Goldstein -
2022 : Using quadratic equations for overparametrized models »
Shuang Li · William Swartworth · Martin Takac · Deanna Needell · Robert Gower -
2022 : Retrieval-based Controllable Molecule Generation »
Jack Wang · Weili Nie · Zhuoran Qiao · Chaowei Xiao · Richard Baraniuk · Anima Anandkumar -
2022 : HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing »
Tianlong Chen · Chengyue Gong · Daniel Diaz · Xuxi Chen · Jordan Wells · Qiang Liu · Zhangyang Wang · Andrew Ellington · Alex Dimakis · Adam Klivans -
2022 : Exact Visualization of Deep Neural Network Geometry and Decision Boundary »
Ahmed Imtiaz Humayun · Randall Balestriero · Richard Baraniuk -
2022 : Discovering the Hidden Vocabulary of DALLE-2 »
Giannis Daras · Alex Dimakis -
2022 : Multiresolution Textual Inversion »
Giannis Daras · Alex Dimakis -
2022 : Using Deep Learning and Macroscopic Imaging of Porcine Heart Valve Leaflets to Predict Uniaxial Stress-Strain Responses »
Luis Victor · CJ Barberan · Richard Baraniuk · Jane Grande-Allen -
2023 Poster: Ambient Diffusion: Learning Clean Distributions from Corrupted Data »
Giannis Daras · Kulin Nitinkumar Shah · Yuval Dagan · Aravind Gollakota · Alex Dimakis · Adam Klivans -
2023 Poster: A Neural Collapse Perspective on Feature Evolution in Graph Neural Networks »
Vignesh Kothapalli · Tom Tirer · Joan Bruna -
2023 Poster: Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation »
David Brandfonbrener · Ofir Nachum · Joan Bruna -
2023 Poster: Solving Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models »
Litu Rout · Negin Raoof · Giannis Daras · Constantine Caramanis · Alex Dimakis · Sanjay Shakkottai -
2023 Poster: Nearly Optimal Bounds for Cyclic Forgetting »
William Swartworth · Deanna Needell · Rachel Ward · Mark Kong · Halyun Jeong -
2023 Poster: Training shallow ReLU networks on noisy data using hinge loss: when do we overfit and is it benign? »
Erin George · Michael Murray · William Swartworth · Deanna Needell -
2023 Poster: On Single-Index Models beyond Gaussian Data »
Aaron Zweig · Loucas PILLAUD-VIVIEN · Joan Bruna -
2023 Poster: Martingale Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent »
Giannis Daras · Yuval Dagan · Alex Dimakis · Constantinos Daskalakis -
2023 Poster: Mitigating Over-smoothing in Transformers via Regularized Nonlocal Functionals »
Tam Nguyen · Tan Nguyen · Richard Baraniuk -
2023 Poster: DataComp: In search of the next generation of multimodal datasets »
Samir Yitzhak Gadre · Gabriel Ilharco · Alex Fang · Jonathan Hayase · Georgios Smyrnis · Thao Nguyen · Ryan Marten · Mitchell Wortsman · Dhruba Ghosh · Jieyu Zhang · Eyal Orgad · Rahim Entezari · Giannis Daras · Sarah Pratt · Vivek Ramanujan · Yonatan Bitton · Kalyani Marathe · Stephen Mussmann · Richard Vencu · Mehdi Cherti · Ranjay Krishna · Pang Wei Koh · Olga Saukh · Alexander Ratner · Shuran Song · Hannaneh Hajishirzi · Ali Farhadi · Romain Beaumont · Sewoong Oh · Alex Dimakis · Jenia Jitsev · Yair Carmon · Vaishaal Shankar · Ludwig Schmidt -
2023 Oral: DataComp: In search of the next generation of multimodal datasets »
Samir Yitzhak Gadre · Gabriel Ilharco · Alex Fang · Jonathan Hayase · Georgios Smyrnis · Thao Nguyen · Ryan Marten · Mitchell Wortsman · Dhruba Ghosh · Jieyu Zhang · Eyal Orgad · Rahim Entezari · Giannis Daras · Sarah Pratt · Vivek Ramanujan · Yonatan Bitton · Kalyani Marathe · Stephen Mussmann · Richard Vencu · Mehdi Cherti · Ranjay Krishna · Pang Wei Koh · Olga Saukh · Alexander Ratner · Shuran Song · Hannaneh Hajishirzi · Ali Farhadi · Romain Beaumont · Sewoong Oh · Alex Dimakis · Jenia Jitsev · Yair Carmon · Vaishaal Shankar · Ludwig Schmidt -
2023 Workshop: Learning-Based Solutions for Inverse Problems »
Shirin Jalali · christopher metzler · Ajil Jalal · Jon Tamir · Reinhard Heckel · Paul Hand · Arian Maleki · Richard Baraniuk -
2022 Poster: Exponential Separations in Symmetric Neural Networks »
Aaron Zweig · Joan Bruna -
2022 Poster: When does return-conditioned supervised learning work for offline reinforcement learning? »
David Brandfonbrener · Alberto Bietti · Jacob Buckman · Romain Laroche · Joan Bruna -
2022 Poster: Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve »
Giannis Daras · Negin Raoof · Zoi Gkalitsiou · Alex Dimakis -
2022 Poster: Zonotope Domains for Lagrangian Neural Network Verification »
Matt Jordan · Jonathan Hayase · Alex Dimakis · Sewoong Oh -
2022 Poster: On Non-Linear operators for Geometric Deep Learning »
Grégoire Sergeant-Perthuis · Jakob Maier · Joan Bruna · Edouard Oyallon -
2022 Poster: Online Nonnegative CP-dictionary Learning for Markovian Data »
Hanbaek Lyu · Christopher Strohmeier · Deanna Needell -
2022 Poster: Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference »
Jasper Tan · Blake Mason · Hamid Javadi · Richard Baraniuk -
2022 Poster: Learning single-index models with shallow neural networks »
Alberto Bietti · Joan Bruna · Clayton Sanford · Min Jae Song -
2021 : Alex Dimakis Talk »
Alex Dimakis -
2021 Workshop: Workshop on Deep Learning and Inverse Problems »
Reinhard Heckel · Paul Hand · Rebecca Willett · christopher metzler · Mahdi Soltanolkotabi -
2021 Poster: On the Sample Complexity of Learning under Geometric Stability »
Alberto Bietti · Luca Venturi · Joan Bruna -
2021 Poster: Inverse Problems Leveraging Pre-trained Contrastive Representations »
Sriram Ravula · Georgios Smyrnis · Matt Jordan · Alex Dimakis -
2021 Poster: The Flip Side of the Reweighted Coin: Duality of Adaptive Dropout and Regularization »
Daniel LeJeune · Hamid Javadi · Richard Baraniuk -
2021 Poster: Robust Compressed Sensing MRI with Deep Generative Priors »
Ajil Jalal · Marius Arvinte · Giannis Daras · Eric Price · Alex Dimakis · Jon Tamir -
2021 Poster: On the Cryptographic Hardness of Learning Single Periodic Neurons »
Min Jae Song · Ilias Zadik · Joan Bruna -
2021 Poster: Interpolation can hurt robust generalization even when there is no noise »
Konstantin Donhauser · Alexandru Tifrea · Michael Aerni · Reinhard Heckel · Fanny Yang -
2021 Poster: Score-based Generative Neural Networks for Large-Scale Optimal Transport »
Grady Daniels · Tyler Maunu · Paul Hand -
2021 Poster: Offline RL Without Off-Policy Evaluation »
David Brandfonbrener · Will Whitney · Rajesh Ranganath · Joan Bruna -
2020 : Invited speaker: Online nonnegative matrix factorization for Markovian and other real data, Deanna Needell and Hanbaek Lyu »
Hanbake Lyu · Deanna Needell -
2020 : Opening Remarks »
Reinhard Heckel · Paul Hand · Soheil Feizi · Lenka Zdeborová · Richard Baraniuk -
2020 Workshop: Workshop on Deep Learning and Inverse Problems »
Reinhard Heckel · Paul Hand · Richard Baraniuk · Lenka Zdeborová · Soheil Feizi -
2020 : Newcomer presentation »
Reinhard Heckel · Paul Hand -
2020 Poster: A mean-field analysis of two-player zero-sum games »
Carles Domingo-Enrich · Samy Jelassi · Arthur Mensch · Grant Rotskoff · Joan Bruna -
2020 Poster: Can Graph Neural Networks Count Substructures? »
Zhengdao Chen · Lei Chen · Soledad Villar · Joan Bruna -
2020 Poster: Analytical Probability Distributions and Exact Expectation-Maximization for Deep Generative Networks »
Randall Balestriero · Sebastien PARIS · Richard Baraniuk -
2020 Poster: SMYRF - Efficient Attention using Asymmetric Clustering »
Giannis Daras · Nikita Kitaev · Augustus Odena · Alex Dimakis -
2020 Session: Orals & Spotlights Track 26: Graph/Relational/Theory »
Joan Bruna · Cassio de Campos -
2020 Poster: MomentumRNN: Integrating Momentum into Recurrent Neural Networks »
Tan Nguyen · Richard Baraniuk · Andrea Bertozzi · Stanley Osher · Bao Wang -
2020 Poster: IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method »
Yossi Arjevani · Joan Bruna · Bugra Can · Mert Gurbuzbalaban · Stefanie Jegelka · Hongzhou Lin -
2020 Poster: Applications of Common Entropy for Causal Inference »
Murat Kocaoglu · Sanjay Shakkottai · Alex Dimakis · Constantine Caramanis · Sriram Vishwanath -
2020 Spotlight: IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method »
Yossi Arjevani · Joan Bruna · Bugra Can · Mert Gurbuzbalaban · Stefanie Jegelka · Hongzhou Lin -
2020 Poster: Exactly Computing the Local Lipschitz Constant of ReLU Networks »
Matt Jordan · Alex Dimakis -
2020 Poster: Nonasymptotic Guarantees for Spiked Matrix Recovery with Generative Priors »
Jorio Cocola · Paul Hand · Vlad Voroninski -
2020 Poster: A Dynamical Central Limit Theorem for Shallow Neural Networks »
Zhengdao Chen · Grant Rotskoff · Joan Bruna · Eric Vanden-Eijnden -
2020 Poster: Robust compressed sensing using generative models »
Ajil Jalal · Liu Liu · Alex Dimakis · Constantine Caramanis -
2019 : Poster Session »
Pravish Sainath · Mohamed Akrout · Charles Delahunt · Nathan Kutz · Guangyu Robert Yang · Joseph Marino · L F Abbott · Nicolas Vecoven · Damien Ernst · andrew warrington · Michael Kagan · Kyunghyun Cho · Kameron Harris · Leopold Grinberg · John J. Hopfield · Dmitry Krotov · Taliah Muhammad · Erick Cobos · Edgar Walker · Jacob Reimer · Andreas Tolias · Alexander Ecker · Janaki Sheth · Yu Zhang · Maciej Wołczyk · Jacek Tabor · Szymon Maszke · Roman Pogodin · Dane Corneil · Wulfram Gerstner · Baihan Lin · Guillermo Cecchi · Jenna M Reinen · Irina Rish · Guillaume Bellec · Darjan Salaj · Anand Subramoney · Wolfgang Maass · Yueqi Wang · Ari Pakman · Jin Hyung Lee · Liam Paninski · Bryan Tripp · Colin Graber · Alex Schwing · Luke Prince · Gabriel Ocker · Michael Buice · Benjamin Lansdell · Konrad Kording · Jack Lindsey · Terrence Sejnowski · Matthew Farrell · Eric Shea-Brown · Nicolas Farrugia · Victor Nepveu · Jiwoong Im · Kristin Branson · Brian Hu · Ramakrishnan Iyer · Stefan Mihalas · Sneha Aenugu · Hananel Hazan · Sihui Dai · Tan Nguyen · Doris Tsao · Richard Baraniuk · Anima Anandkumar · Hidenori Tanaka · Aran Nayebi · Stephen Baccus · Surya Ganguli · Dean Pospisil · Eilif Muller · Jeffrey S Cheng · Gaël Varoquaux · Kamalaker Dadi · Dimitrios C Gklezakos · Rajesh PN Rao · Anand Louis · Christos Papadimitriou · Santosh Vempala · Naganand Yadati · Daniel Zdeblick · Daniela M Witten · Nicholas Roberts · Vinay Prabhu · Pierre Bellec · Poornima Ramesh · Jakob H Macke · Santiago Cadena · Guillaume Bellec · Franz Scherr · Owen Marschall · Robert Kim · Hannes Rapp · Marcio Fonseca · Oliver Armitage · Jiwoong Im · Thomas Hardcastle · Abhishek Sharma · Wyeth Bair · Adrian Valente · Shane Shang · Merav Stern · Rutuja Patil · Peter Wang · Sruthi Gorantla · Peter Stratton · Tristan Edwards · Jialin Lu · Martin Ester · Yurii Vlasov · Siavash Golkar -
2019 : Coffee Break + Poster Session II »
Niki Parmar · Haraldur Hallgrimsson · Christian Kames · Arijit Patra · Abdullah-Al-Zubaer Imran · Junlin Yang · David Zimmerer · Arunava Chakravarty · Lawrence Schobs · Alexej Gossmann · TUNG-I CHEN · Tarun Dutt · Li Yao · Octavio Eleazar Martinez Manzanera · Johannes Pinckaers · Mehmet Ufuk Dalmis · Deepak Gupta · Nandinee Haq · David Ruhe · Jevgenij Gamper · Alfredo De Goyeneche Macaya · Jonathan Tamir · Byunghwan Jeon · SUBBAREDDY OOTA · Reinhard Heckel · Pamela K Douglas · Oleksii Sidorov · Ke Wang · Melanie Garcia · Ravi Soni · Ankita Shukla -
2019 : Poster and Coffee Break 1 »
Aaron Sidford · Aditya Mahajan · Alejandro Ribeiro · Alex Lewandowski · Ali H Sayed · Ambuj Tewari · Angelika Steger · Anima Anandkumar · Asier Mujika · Hilbert J Kappen · Bolei Zhou · Byron Boots · Chelsea Finn · Chen-Yu Wei · Chi Jin · Ching-An Cheng · Christina Yu · Clement Gehring · Craig Boutilier · Dahua Lin · Daniel McNamee · Daniel Russo · David Brandfonbrener · Denny Zhou · Devesh Jha · Diego Romeres · Doina Precup · Dominik Thalmeier · Eduard Gorbunov · Elad Hazan · Elena Smirnova · Elvis Dohmatob · Emma Brunskill · Enrique Munoz de Cote · Ethan Waldie · Florian Meier · Florian Schaefer · Ge Liu · Gergely Neu · Haim Kaplan · Hao Sun · Hengshuai Yao · Jalaj Bhandari · James A Preiss · Jayakumar Subramanian · Jiajin Li · Jieping Ye · Jimmy Smith · Joan Bas Serrano · Joan Bruna · John Langford · Jonathan Lee · Jose A. Arjona-Medina · Kaiqing Zhang · Karan Singh · Yuping Luo · Zafarali Ahmed · Zaiwei Chen · Zhaoran Wang · Zhizhong Li · Zhuoran Yang · Ziping Xu · Ziyang Tang · Yi Mao · David Brandfonbrener · Shirli Di-Castro · Riashat Islam · Zuyue Fu · Abhishek Naik · Saurabh Kumar · Benjamin Petit · Angeliki Kamoutsi · Simone Totaro · Arvind Raghunathan · Rui Wu · Donghwan Lee · Dongsheng Ding · Alec Koppel · Hao Sun · Christian Tjandraatmadja · Mahdi Karami · Jincheng Mei · Chenjun Xiao · Junfeng Wen · Zichen Zhang · Ross Goroshin · Mohammad Pezeshki · Jiaqi Zhai · Philip Amortila · Shuo Huang · Mariya Vasileva · El houcine Bergou · Adel Ahmadyan · Haoran Sun · Sheng Zhang · Lukas Gruber · Yuanhao Wang · Tetiana Parshakova -
2019 : Surya Ganguli, Yasaman Bahri, Florent Krzakala moderated by Lenka Zdeborova »
Florent Krzakala · Yasaman Bahri · Surya Ganguli · Lenka Zdeborová · Adji Bousso Dieng · Joan Bruna -
2019 : Poster Spotlight 1 »
David Brandfonbrener · Joan Bruna · Tom Zahavy · Haim Kaplan · Yishay Mansour · Nikos Karampatziakis · John Langford · Paul Mineiro · Donghwan Lee · Niao He -
2019 : Opening Remarks »
Reinhard Heckel · Paul Hand · Alex Dimakis · Joan Bruna · Deanna Needell · Richard Baraniuk -
2019 Workshop: Information Theory and Machine Learning »
Shengjia Zhao · Jiaming Song · Yanjun Han · Kristy Choi · Pratyusha Kalluri · Ben Poole · Alex Dimakis · Jiantao Jiao · Tsachy Weissman · Stefano Ermon -
2019 Poster: Gradient Dynamics of Shallow Univariate ReLU Networks »
Francis Williams · Matthew Trager · Daniele Panozzo · Claudio Silva · Denis Zorin · Joan Bruna -
2019 Poster: On the Expressive Power of Deep Polynomial Neural Networks »
Joe Kileel · Matthew Trager · Joan Bruna -
2019 Poster: Global Guarantees for Blind Demodulation with Generative Priors »
Paul Hand · Babhru Joshi -
2019 Poster: Inverting Deep Generative models, One layer at a time »
Qi Lei · Ajil Jalal · Inderjit Dhillon · Alex Dimakis -
2019 Poster: Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias »
Stéphane d'Ascoli · Levent Sagun · Giulio Biroli · Joan Bruna -
2019 Poster: Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes »
Matt Jordan · Justin Lewis · Alex Dimakis -
2019 Poster: Primal-Dual Block Generalized Frank-Wolfe »
Qi Lei · JIACHENG ZHUO · Constantine Caramanis · Inderjit Dhillon · Alex Dimakis -
2019 Poster: On the equivalence between graph isomorphism testing and function approximation with GNNs »
Zhengdao Chen · Soledad Villar · Lei Chen · Joan Bruna -
2019 Poster: Sparse Logistic Regression Learns All Discrete Pairwise Graphical Models »
Shanshan Wu · Sujay Sanghavi · Alex Dimakis -
2019 Spotlight: Sparse Logistic Regression Learns All Discrete Pairwise Graphical Models »
Shanshan Wu · Sujay Sanghavi · Alex Dimakis -
2019 Poster: Stability of Graph Scattering Transforms »
Fernando Gama · Alejandro Ribeiro · Joan Bruna -
2019 Poster: Learning Distributions Generated by One-Layer ReLU Networks »
Shanshan Wu · Alex Dimakis · Sujay Sanghavi -
2019 Poster: The Geometry of Deep Networks: Power Diagram Subdivision »
Randall Balestriero · Romain Cosentino · Behnaam Aazhang · Richard Baraniuk -
2018 : Invited Talk 3 »
Joan Bruna -
2018 Workshop: Integration of Deep Learning Theories »
Richard Baraniuk · Anima Anandkumar · Stephane Mallat · Ankit Patel · nhật Hồ -
2018 : Panel Discussion »
Richard Baraniuk · Maarten V. de Hoop · Paul A Johnson -
2018 : Joan Bruna »
Joan Bruna -
2018 : Introduction »
Laura Pyrak-Nolte · James Rustad · Richard Baraniuk -
2018 Workshop: Machine Learning for Geophysical & Geochemical Signals »
Laura Pyrak-Nolte · James Rustad · Richard Baraniuk -
2018 Poster: A convex program for bilinear inversion of sparse vectors »
Alireza Aghasi · Ali Ahmed · Paul Hand · Babhru Joshi -
2018 Poster: Experimental Design for Cost-Aware Learning of Causal Graphs »
Erik Lindgren · Murat Kocaoglu · Alex Dimakis · Sriram Vishwanath -
2018 Poster: Blind Deconvolutional Phase Retrieval via Convex Programming »
Ali Ahmed · Alireza Aghasi · Paul Hand -
2018 Spotlight: Blind Deconvolutional Phase Retrieval via Convex Programming »
Ali Ahmed · Alireza Aghasi · Paul Hand -
2018 Poster: Phase Retrieval Under a Generative Prior »
Paul Hand · Oscar Leong · Vlad Voroninski -
2018 Oral: Phase Retrieval Under a Generative Prior »
Paul Hand · Oscar Leong · Vlad Voroninski -
2017 Workshop: NIPS Highlights (MLTrain), Learn How to code a paper with state of the art frameworks »
Alex Dimakis · Nikolaos Vasiloglou · Guy Van den Broeck · Alexander Ihler · Assaf Araki -
2017 Workshop: Advances in Modeling and Learning Interactions from Complex Data »
Gautam Dasarathy · Mladen Kolar · Richard Baraniuk -
2017 Poster: Streaming Weak Submodularity: Interpreting Neural Networks on the Fly »
Ethan Elenberg · Alex Dimakis · Moran Feldman · Amin Karbasi -
2017 Oral: Streaming Weak Submodularity: Interpreting Neural Networks on the Fly »
Ethan Elenberg · Alex Dimakis · Moran Feldman · Amin Karbasi -
2017 Poster: Learned D-AMP: Principled Neural Network based Compressive Image Recovery »
Chris Metzler · Ali Mousavi · Richard Baraniuk -
2017 Poster: Model-Powered Conditional Independence Test »
Rajat Sen · Ananda Theertha Suresh · Karthikeyan Shanmugam · Alex Dimakis · Sanjay Shakkottai -
2017 Tutorial: Geometric Deep Learning on Graphs and Manifolds »
Michael Bronstein · Joan Bruna · arthur szlam · Xavier Bresson · Yann LeCun -
2016 Workshop: Machine Learning for Education »
Richard Baraniuk · Jiquan Ngiam · Christoph Studer · Phillip Grimaldi · Andrew Lan -
2016 Poster: Leveraging Sparsity for Efficient Submodular Data Summarization »
Erik Lindgren · Shanshan Wu · Alex Dimakis -
2016 Poster: A Probabilistic Framework for Deep Learning »
Ankit Patel · Tan Nguyen · Richard Baraniuk -
2016 Poster: Single Pass PCA of Matrix Products »
Shanshan Wu · Srinadh Bhojanapalli · Sujay Sanghavi · Alex Dimakis -
2015 : Low-dimensional inference with high-dimensional data »
Richard Baraniuk -
2015 : Probabilistic Theory of Deep Learning »
Richard Baraniuk -
2015 Poster: Orthogonal NMF through Subspace Exploration »
Megasthenis Asteris · Dimitris Papailiopoulos · Alex Dimakis -
2015 Poster: Sparse PCA via Bipartite Matchings »
Megasthenis Asteris · Dimitris Papailiopoulos · Anastasios Kyrillidis · Alex Dimakis -
2015 Poster: Learning Causal Graphs with Small Interventions »
Karthikeyan Shanmugam · Murat Kocaoglu · Alex Dimakis · Sriram Vishwanath -
2014 Workshop: Human Propelled Machine Learning »
Richard Baraniuk · Michael Mozer · Divyanshu Vats · Christoph Studer · Andrew E Waters · Andrew Lan -
2014 Poster: Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation »
Emily Denton · Wojciech Zaremba · Joan Bruna · Yann LeCun · Rob Fergus -
2014 Poster: Sparse Polynomial Learning and Graph Sketching »
Murat Kocaoglu · Karthikeyan Shanmugam · Alex Dimakis · Adam Klivans -
2014 Poster: On the Information Theoretic Limits of Learning Ising Models »
Rashish Tandon · Karthikeyan Shanmugam · Pradeep Ravikumar · Alex Dimakis -
2014 Oral: Sparse Polynomial Learning and Graph Sketching »
Murat Kocaoglu · Karthikeyan Shanmugam · Alex Dimakis · Adam Klivans -
2013 Poster: When in Doubt, SWAP: High-Dimensional Sparse Recovery from Correlated Measurements »
Divyanshu Vats · Richard Baraniuk -
2011 Poster: SpaRCS: Recovering low-rank and sparse matrices from compressive measurements »
Andrew E Waters · Aswin C Sankaranarayanan · Richard Baraniuk -
2009 Workshop: Manifolds, sparsity, and structured models: When can low-dimensional geometry really help? »
Richard Baraniuk · Volkan Cevher · Mark A Davenport · Piotr Indyk · Bruno Olshausen · Michael B Wakin -
2008 Poster: Sparse Signal Recovery Using Markov Random Fields »
Volkan Cevher · Marco F Duarte · Chinmay Hegde · Richard Baraniuk -
2008 Spotlight: Sparse Signal Recovery Using Markov Random Fields »
Volkan Cevher · Marco F Duarte · Chinmay Hegde · Richard Baraniuk -
2007 Poster: Random Projections for Manifold Learning »
Chinmay Hegde · Richard Baraniuk