Whether it is biological networks of proteins and genes or technological ones like sensor networks and the Internet, we are surrounded today by complex systems composed of entities interacting with and affecting each other. An urgent need has therefore emerged to develop novel techniques for modeling, learning, and conducting inference in such networked systems. Consequently, we have seen progress from a variety of disciplines, both in fundamental methodology and in applications of such methods to practical problems. However, much work remains to be done, and a unifying and principled framework for dealing with these problems remains elusive. This workshop aims to bring together theoreticians and practitioners both to chart recent advances and to discuss new directions in understanding interactions in large and complex systems. NIPS, with its attendance by a broad and cross-disciplinary set of researchers, offers the ideal venue for this exchange of ideas.
The workshop will feature a mix of contributed talks, contributed posters, and invited talks by leading researchers from diverse backgrounds working in these areas. We will also reserve a specific segment of the schedule for the presentation of open problems, and will leave ample time for discussions aimed explicitly at sparking collaborations among the attendees.
We encourage submissions on a variety of topics, including but not limited to:
* Computationally and statistically efficient techniques for learning graphical models from data, including convex, greedy, and active approaches.
* New probabilistic models of interacting systems, including nonparametric and exponential family graphical models.
* Community detection algorithms, including semi-supervised and adaptive approaches.
* Techniques for modeling and learning causal relationships from data.
* Bayesian techniques for modeling complex data and causal relationships.
* Kernel methods for directed and undirected graphical models.
* Applications of these methods in areas such as sensor networks, computer networks, social networks, and biological networks (e.g., phylogenetic trees and graphs).
Successful submissions will emphasize the role of statistical and computational learning in the problem at hand. The author(s) of these submissions will be invited to present their work as either a poster or a contributed talk. Alongside these, we also solicit submissions of open problems that fit the theme of the workshop. The author(s) of the selected open problems will be able to present the problem to the attendees and solicit feedback and collaborations.
Fri 8:00 a.m. - 8:05 a.m. | Opening Remarks | Gautam Dasarathy
Fri 8:05 a.m. - 8:40 a.m. | Community Detection and Invariance to Distribution (Invited Talk) | Guy Bresler
We consider the problem of recovering a hidden community of size K from a graph where edges between members of the community have label X drawn i.i.d. according to P and all other edges have labels drawn i.i.d. according to Q. The information limits for this problem were characterized by Hajek-Wu-Xu in 2016 in terms of the KL-divergence between P and Q. We complement their work by showing that for a broad class of distributions P and Q, the computational difficulty is also determined by the KL-divergence. We additionally show how to reduce general P and Q to the case P = Ber(p) and Q = Ber(q) and vice versa, giving a direct computational equivalence (up to polynomial time).
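To make the Bernoulli case concrete, here is a minimal simulation sketch of the planted-community model with P = Ber(p) and Q = Ber(q); the degree-thresholding recovery shown is a naive baseline for illustration, not an algorithm from the talk, and all parameter values are made up.

```python
import numpy as np

def sample_hidden_community(n, K, p, q, rng):
    """Planted community of size K: edges inside the community are
    Ber(p), all other edges are Ber(q)."""
    members = rng.choice(n, size=K, replace=False)
    inside = np.zeros(n, dtype=bool)
    inside[members] = True
    A = (rng.random((n, n)) < q).astype(int)
    A[np.ix_(members, members)] = (rng.random((K, K)) < p).astype(int)
    A = np.triu(A, 1)               # keep one label per vertex pair
    return A + A.T, inside

rng = np.random.default_rng(0)
A, truth = sample_hidden_community(n=500, K=50, p=0.5, q=0.1, rng=rng)

# Naive baseline: guess the K highest-degree vertices.
guess = np.argsort(A.sum(axis=1))[-truth.sum():]
print("fraction of guesses inside the community:", truth[guess].mean())
```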
Fri 8:40 a.m. - 9:00 a.m. | Edge Exchangeable Temporal Network Models (Contributed Talk) | Yin Cheng Ng
We propose a dynamic edge exchangeable network model that can capture the sparse connections observed in real temporal networks, in contrast to existing models, which are dense. The model achieves superior link prediction accuracy when compared to a dynamic variant of the blockmodel, and is able to extract interpretable time-varying community structures. In addition to sparsity, the model accounts for the effect of social influence on vertices' future behaviours. Compared to dynamic blockmodels, our model has a smaller latent space. The compact latent space requires fewer parameters to be estimated in variational inference and results in a computationally friendly inference algorithm.
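The abstract does not spell out the construction, so here is a generic edge-exchangeable sampler that illustrates why such models can be sparse: each endpoint of every new edge is drawn from a Pitman-Yor urn over vertices, so the vertex set keeps growing with the number of edges. The parameters alpha and theta are illustrative assumptions, and this is a sketch of the general mechanism rather than the authors' model.

```python
import numpy as np

def edge_exchangeable_graph(num_edges, alpha=0.5, theta=1.0, seed=0):
    """Draw each edge endpoint from a Pitman-Yor urn over vertices:
    an existing vertex v with probability proportional to deg(v) - alpha,
    a brand-new vertex with probability proportional to theta + alpha * V."""
    rng = np.random.default_rng(seed)
    degrees = []      # endpoint counts per vertex
    total = 0.0       # endpoint slots assigned so far
    edges = []
    for _ in range(num_edges):
        pair = []
        for _ in range(2):
            V = len(degrees)
            if rng.random() < (theta + alpha * V) / (theta + total):
                degrees.append(0)         # a new vertex enters the graph
                v = V
            else:
                w = np.asarray(degrees) - alpha
                v = rng.choice(V, p=w / w.sum())
            degrees[v] += 1
            total += 1.0
            pair.append(v)
        edges.append(tuple(pair))
    return edges, len(degrees)

edges, n_vertices = edge_exchangeable_graph(10_000)
print(len(edges), "edges over", n_vertices, "vertices")  # vertex count grows with edges
```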
Fri 9:00 a.m. - 9:20 a.m. | A Data-Driven Sparse-Learning Approach to Model Reduction in Chemical Reaction Networks (Contributed Talk) | Farshad Harirchi
In this paper, we propose an optimization-based sparse learning approach to identify the set of most influential reactions in a chemical reaction network. This reduced set of reactions is then employed to construct a reduced chemical reaction mechanism, which is relevant to chemical interaction network modeling. The problem of identifying influential reactions is first formulated as a mixed-integer quadratic program, and then a relaxation method is leveraged to reduce the computational complexity of our approach. Qualitative and quantitative validation of the sparse encoding approach demonstrates that the model captures important network structural properties with moderate computational load.
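The paper formulates reaction selection as a mixed-integer quadratic program; a standard way to relax such a formulation is to drop the integer on/off variables in favour of an l1 penalty on the reaction coefficients. Below is a minimal sketch of that kind of relaxation, solved by proximal gradient descent (ISTA) on synthetic data; the feature matrix and parameters are made up, and this is not the paper's specific algorithm.

```python
import numpy as np

def ista_lasso(R, y, lam, n_iter=2000):
    """Minimize 0.5 * ||R w - y||^2 + lam * ||w||_1 by proximal gradient.
    A sparse w flags the influential reactions (columns of R)."""
    step = 1.0 / np.linalg.norm(R, 2) ** 2        # 1 / Lipschitz constant
    w = np.zeros(R.shape[1])
    for _ in range(n_iter):
        w = w - step * (R.T @ (R @ w - y))        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(1)
n_obs, n_reactions = 200, 50
R = rng.standard_normal((n_obs, n_reactions))     # hypothetical rate features
w_true = np.zeros(n_reactions)
w_true[:5] = 3.0 * rng.standard_normal(5)         # only 5 reactions matter
y = R @ w_true + 0.1 * rng.standard_normal(n_obs)

w_hat = ista_lasso(R, y, lam=2.0)
print("selected reactions:", np.flatnonzero(np.abs(w_hat) > 1e-3))
```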
Fri 9:20 a.m. - 10:00 a.m. | Poster Spotlights
Fri 10:00 a.m. - 11:00 a.m. | Coffee Break + Posters (Break/Poster Setup)
Fri 11:00 a.m. - 11:35 a.m. | Your dreams may come true with MTP2 (Invited Talk) | Caroline Uhler
We study maximum likelihood estimation for exponential families that are multivariate totally positive of order two (MTP2). Such distributions appear in the context of ferromagnetism in the Ising model and in various latent models, such as the Brownian motion tree models used in phylogenetics. We show that maximum likelihood estimation for MTP2 exponential families is a convex optimization problem. For quadratic exponential families such as Ising models and Gaussian graphical models, we show that MTP2 implies sparsity of the underlying graph without the need for a tuning parameter. In addition, we characterize a subgraph and a supergraph of Gaussian graphical models under MTP2. Moreover, we show that the MLE always exists, even in the high-dimensional setting. These properties make MTP2 constraints an intriguing alternative to methods for learning sparse graphical models such as the graphical lasso.
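For Gaussians, MTP2 is equivalent to the precision matrix being an M-matrix (all off-diagonal entries non-positive), and the MLE maximizes log det K - tr(SK) over that convex set. The sketch below solves this with crude projected gradient ascent on an equicorrelated toy example; it is for illustration only, and a serious implementation would use a dedicated convex solver.

```python
import numpy as np

def project(K):
    """Clip off-diagonal entries at zero (M-matrix constraint)."""
    P = np.minimum(K, 0.0)
    np.fill_diagonal(P, np.diag(K))
    return P

def mtp2_gaussian_mle(S, n_iter=2000, step=0.05):
    """Crude projected gradient ascent on log det K - tr(S K)
    subject to K_ij <= 0 for i != j. Illustration only."""
    K = np.diag(1.0 / np.diag(S))                 # feasible, positive-definite start
    for _ in range(n_iter):
        grad = np.linalg.inv(K) - S               # gradient of the log-likelihood
        s = step
        K_new = project(K + s * grad)
        while np.linalg.eigvalsh(K_new).min() <= 0:   # backtrack to stay PD
            s /= 2.0
            K_new = project(K + s * grad)
        K = K_new
    return K

rng = np.random.default_rng(2)
Sigma = 0.5 * np.ones((4, 4)) + 0.5 * np.eye(4)   # equicorrelated, hence MTP2
X = rng.standard_normal((2000, 4)) @ np.linalg.cholesky(Sigma).T
K_hat = mtp2_gaussian_mle(np.cov(X, rowvar=False))
print(np.round(K_hat, 2))   # off-diagonals are <= 0; no tuning parameter involved
```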
Fri 11:35 a.m. - 11:55 a.m. | Estimating Mixed Memberships with Sharp Eigenvector Deviations (Contributed Talk) | Xueyu Mao
Real-world networks often have nodes belonging to multiple communities. We consider the detection of overlapping communities under the popular Mixed Membership Stochastic Blockmodel (MMSB). Using the inherent geometry of this model, we link the inference of overlapping communities to the problem of finding corners in a noisy, rotated, and scaled simplex, for which consistent algorithms exist. We use this as a building block in our algorithm for inferring the community memberships of each node, and prove its consistency. As a byproduct of our analysis, we derive sharp row-wise eigenvector deviation bounds and provide a cleaning step that drastically improves performance on sparse networks. We also propose conditions for identifiability of the model that are both necessary and sufficient, whereas existing methods typically present only sufficient conditions. The empirical performance of our method is demonstrated on simulated and real datasets scaling up to 100,000 nodes.
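The corner-finding step can be illustrated with the classical successive projection algorithm (SPA): repeatedly pick the row of largest norm and project the remaining rows away from it. This is a standard routine for noisy simplex corner finding, shown here on made-up data with nearly pure nodes; it is not necessarily the specific algorithm used in the paper.

```python
import numpy as np

def successive_projection(V, k):
    """Return indices of k rows of V that approximately sit at the
    corners of the simplex containing the rows (standard SPA)."""
    R = V.astype(float).copy()
    corners = []
    for _ in range(k):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))  # farthest remaining row
        corners.append(i)
        u = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ u, u)                     # project out that direction
    return corners

# Toy data: rows are noisy mixed-membership vectors over 3 communities;
# the simplex corners correspond to (nearly) pure nodes.
rng = np.random.default_rng(3)
memberships = rng.dirichlet(np.ones(3), size=300)
V = memberships + 0.01 * rng.standard_normal((300, 3))

idx = successive_projection(V, 3)
print("rows recovered as corners:\n", np.round(V[idx], 2))  # close to e1, e2, e3
```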
Fri 12:00 p.m. - 2:00 p.m. | Lunch Break
Fri 2:00 p.m. - 2:35 p.m. | Recovering Latent Causal Relations from Time Series Data (Invited Talk) | Negar Kiyavash
Discovering causal relationships from data is a challenging problem that is exacerbated when some of the variables of interest are latent. In this talk, we discuss the problem of learning the support of the transition matrix between random processes in a Vector Autoregressive (VAR) model from samples when a subset of the processes is latent. It is well known that ignoring the effect of the latent processes may lead to very different estimates of the influences even among the observed processes. We are not only interested in identifying the influences among the observed processes, but also aim to learn those between the latent ones, and those from the latent to the observed ones. We show that the support of the transition matrix among the observed processes, and the lengths of all latent paths between any two observed processes, can be identified under certain conditions on the VAR model. Our results apply to both the non-Gaussian and Gaussian cases, and experimental results on various synthetic and real-world datasets validate our theoretical findings.
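A small simulation illustrates the phenomenon: in a VAR(1) model where a latent process z drives both observed series, a least-squares fit on the observed coordinates alone shows spurious cross-influences that do not exist in the true transition matrix. The numbers below are made up for illustration; this demonstrates the problem setup, not the talk's identification method.

```python
import numpy as np

rng = np.random.default_rng(4)

# VAR(1) on (x1, x2, z); the latent z drives both observed series,
# while x1 and x2 have no direct influence on each other.
A = np.array([[0.3, 0.0, 0.5],
              [0.0, 0.3, 0.5],
              [0.0, 0.0, 0.9]])
T = 20_000
X = np.zeros((T, 3))
for t in range(1, T):
    X[t] = A @ X[t - 1] + 0.1 * rng.standard_normal(3)

# Least-squares VAR fit using only the observed coordinates (x1, x2).
Y, Z = X[1:, :2], X[:-1, :2]
A_obs = np.linalg.lstsq(Z, Y, rcond=None)[0].T
print(np.round(A_obs, 2))  # nonzero cross-terms appear, induced by the hidden z
```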
Fri 2:35 p.m. - 2:55 p.m. | Learning High-Dimensional DAGs: Provable Statistical Guarantees and Scalable Approximation (Contributed Talk)
Fri 4:00 p.m. - 4:35 p.m. | Conditional Densities and Efficient Models in Infinite Exponential Families (Invited Talk) | Arthur Gretton
The exponential family is one of the most powerful and widely used classes of models in statistics. A method was recently developed to fit this model when the natural parameter and sufficient statistic are infinite dimensional, using a score matching approach. The infinite exponential family is a natural generalisation of the finite case, much like the Gaussian and Dirichlet processes generalise their respective finite models. In this talk, I'll describe two recent results which make this model more applicable in practice, by reducing the computational burden and improving performance for high-dimensional data. The first is a Nyström-like approximation to the full solution. We prove that this approximate solution has the same consistency and convergence rates as the full-rank solution (exactly in Fisher distance, and nearly in other distances), with guarantees on the degree of cost and storage reduction. The second result is a generalisation of the model family to the conditional case, again with consistency guarantees. In experiments, the conditional model generally outperforms a competing approach with consistency guarantees, and is competitive with a deep conditional density model on datasets that exhibit abrupt transitions and heteroscedasticity.
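For a finite-dimensional exponential family, the score matching objective is quadratic in the natural parameter, so fitting reduces to one linear solve; the kernel (infinite-dimensional) version discussed in the talk has the same structure, with the sufficient statistic replaced by a feature map. Here is a minimal sketch for p(x) ∝ exp(theta1 * x + theta2 * x^2) fit to Gaussian samples; all values are made up.

```python
import numpy as np

def score_matching_expfam(x):
    """Fit p(x) ∝ exp(theta1 * x + theta2 * x^2) by score matching.
    With T(x) = (x, x^2), the objective E[0.5 * (theta . T'(x))^2 + theta . T''(x)]
    is quadratic, so the minimizer solves G theta = -b with
    G = E[T'(x) T'(x)^T] and b = E[T''(x)]."""
    Tp = np.stack([np.ones_like(x), 2 * x], axis=1)                  # T'(x)
    Tpp = np.stack([np.zeros_like(x), 2 * np.ones_like(x)], axis=1)  # T''(x)
    G = Tp.T @ Tp / len(x)
    b = Tpp.mean(axis=0)
    return np.linalg.solve(G, -b)

rng = np.random.default_rng(5)
mu, sigma = 1.5, 2.0
x = rng.normal(mu, sigma, size=50_000)
theta = score_matching_expfam(x)
# For N(mu, sigma^2): theta1 = mu / sigma^2, theta2 = -1 / (2 sigma^2).
print(np.round(theta, 3), "vs", (mu / sigma**2, -1 / (2 * sigma**2)))
```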
Fri 4:35 p.m. - 4:55 p.m. | The Expxorcist: Nonparametric Graphical Models Via Conditional Exponential Densities (Contributed Talk) | Arun Suggala
Non-parametric multivariate density estimation faces strong statistical and computational bottlenecks, and the more practical approaches impose near-parametric assumptions on the form of the density functions. In this paper, we leverage recent developments to propose a class of non-parametric models which have very attractive computational and statistical properties.
Fri 4:55 p.m. - 5:30 p.m. | Mathematical and Computational Challenges in Reconstructing Evolution (Invited Talk) | Tandy Warnow
Reconstructing evolutionary histories is a basic step in much of biological discovery, as well as in historical linguistics and other domains. Inference methods based on mathematical models of evolution have been used to make substantial advances, ranging from understanding the early origins of life, to predicting protein structures and functions, to addressing questions such as "Where did the Indo-European languages begin?" In this talk, I will describe the current state of the art in phylogeny estimation in these domains and what is understood from a mathematical perspective, and identify fascinating open problems where novel mathematical research, drawing from graph theory, algorithms, and probability theory, is needed. This talk will be accessible to mathematicians, computer scientists, and probabilists, and does not require any knowledge of biology.
Author Information
Gautam Dasarathy (Rice University)
Mladen Kolar (University of Chicago)
Richard Baraniuk (Rice University)