Scattering transforms are non-trainable deep convolutional architectures that exploit the multiscale resolution of a wavelet filter bank to obtain an appropriate representation of data. More importantly, they are provably invariant to translations and stable to perturbations that are close to translations. This stability property endows the scattering transform with robustness to small changes in the metric domain of the data. When considering network data, regular convolutions no longer apply, since the data domain presents an irregular structure given by the network topology. In this work, we extend scattering transforms to network data by using multiresolution graph wavelets, which can be computed by means of graph convolutions. Furthermore, we prove that the resulting graph scattering transforms are stable to metric perturbations of the underlying network. This renders graph scattering transforms robust to changes in the network topology, making them particularly useful for transfer learning, topology estimation, and time-varying graphs.
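To make the construction concrete, the following is a minimal sketch of a graph scattering transform. It is an illustrative simplification, not the paper's exact construction: it builds a hypothetical diffusion-wavelet filter bank from a (symmetric) adjacency matrix and cascades wavelet filtering with a modulus nonlinearity, collecting a low-pass summary (here, the mean) of each scattering path. The lazy-diffusion operator `T` and the dyadic wavelet differences are assumptions chosen for simplicity.

```python
import numpy as np

def diffusion_wavelets(A, J):
    """Build a hypothetical bank of diffusion wavelet operators from a
    symmetric adjacency matrix A: with lazy diffusion T = (I + A/lmax)/2,
    set Psi_0 = I - T and Psi_j = T^(2^(j-1)) - T^(2^j) for j >= 1.
    This is a simplified stand-in for the multiresolution graph wavelets."""
    n = A.shape[0]
    lmax = np.max(np.abs(np.linalg.eigvalsh(A)))  # spectral normalization
    T = 0.5 * (np.eye(n) + A / lmax)              # lazy diffusion operator
    powers = [np.linalg.matrix_power(T, 2 ** j) for j in range(J + 1)]
    Psi = [np.eye(n) - T]
    Psi += [powers[j - 1] - powers[j] for j in range(1, J + 1)]
    return Psi

def graph_scattering(x, Psi, depth=2):
    """Cascade wavelet filtering (a graph convolution, i.e. a polynomial in
    the diffusion operator) with a pointwise modulus, and collect the mean
    of each scattering path as the representation."""
    coeffs, layer = [x.mean()], [x]
    for _ in range(depth):
        # each path branches into one new path per wavelet scale
        layer = [np.abs(P @ u) for u in layer for P in Psi]
        coeffs += [u.mean() for u in layer]
    return np.array(coeffs)
```

With `J + 1` wavelets and `depth` layers, the representation has `1 + (J+1) + ... + (J+1)^depth` coefficients; a practical implementation would prune the exponential number of paths.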
Author Information
Fernando Gama (University of Pennsylvania)
I am a Ph.D. candidate in the Electrical and Systems Engineering department of the University of Pennsylvania. My advisor is Prof. Alejandro Ribeiro. I received an Electronic Engineering degree from the School of Engineering of the University of Buenos Aires, Argentina, in 2013, and an M.A. in Statistics from the Wharton School in 2017. I was a visiting researcher at TU Delft in 2017 and a research intern at Facebook Artificial Intelligence Research in 2018. I was awarded a Fulbright scholarship for international students for 2014-2016. My research interests currently lie in the field of machine learning for network data. More specifically, I am interested in developing collaborative intelligence. The fundamental objective is for a group of entities (modeled as nodes in a graph; this could be a team of autonomous agents, sensors in a network, sources in a power grid, or vehicles in a transportation network) to learn, from data, how to collaboratively accomplish a certain task. The challenge is that the nodes have access only to partial, local information acquired through exchanges with neighboring nodes, but need to coordinate a global solution for the entire team. To tackle this problem, we have been developing tools within the context of graph neural networks (GNNs). We have been focusing on solutions that can be implemented locally on a given graph, exploiting the fact that nodes have computational capabilities. We have also obtained theoretical results on how the performance of GNNs changes when the underlying graph changes. This allows us to establish limits on transfer learning and time-varying graphs. Currently, we are researching applications to teams of autonomous agents, power grids, and wireless networks.
Alejandro Ribeiro (University of Pennsylvania)
Joan Bruna (NYU)
More from the Same Authors

2021 Datasets and Benchmarks: Dataset and Benchmark Track 1 »
Joaquin Vanschoren · Serena Yeung · Maria Xenochristou 
2021 Poster: On the Sample Complexity of Learning under Geometric Stability »
Alberto Bietti · Joan Bruna 
2021 Poster: On the Cryptographic Hardness of Learning Single Periodic Neurons »
Min Jae Song · Ilias Zadik · Joan Bruna 
2021 Poster: Adversarial Robustness with Semi-Infinite Constrained Learning »
Alexander Robey · Luiz Chamon · George Pappas · Alejandro Ribeiro · Hamed Hassani 
2021 Poster: Offline RL Without Off-Policy Evaluation »
David Brandfonbrener · William Whitney · Rajesh Ranganath · Joan Bruna 
2021 Spotlight: Offline RL Without Off-Policy Evaluation »
David Brandfonbrener · William Whitney · Rajesh Ranganath · Joan Bruna 
2020 Poster: Sinkhorn Natural Gradient for Generative Models »
Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani 
2020 Poster: Sinkhorn Barycenter via Functional Gradient Descent »
Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani 
2020 Spotlight: Sinkhorn Natural Gradient for Generative Models »
Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani 
2020 Poster: A mean-field analysis of two-player zero-sum games »
Carles Domingo-Enrich · Samy Jelassi · Arthur Mensch · Grant Rotskoff · Joan Bruna 
2020 Poster: Can Graph Neural Networks Count Substructures? »
Zhengdao Chen · Lei Chen · Soledad Villar · Joan Bruna 
2020 Poster: Graphon Neural Networks and the Transferability of Graph Neural Networks »
Luana Ruiz · Luiz Chamon · Alejandro Ribeiro 
2020 Session: Orals & Spotlights Track 26: Graph/Relational/Theory »
Joan Bruna · Cassio de Campos 
2020 Poster: IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method »
Yossi Arjevani · Joan Bruna · Bugra Can · Mert Gurbuzbalaban · Stefanie Jegelka · Hongzhou Lin 
2020 Spotlight: IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method »
Yossi Arjevani · Joan Bruna · Bugra Can · Mert Gurbuzbalaban · Stefanie Jegelka · Hongzhou Lin 
2020 Poster: A Dynamical Central Limit Theorem for Shallow Neural Networks »
Zhengdao Chen · Grant Rotskoff · Joan Bruna · Eric Vanden-Eijnden
2020 Poster: Probably Approximately Correct Constrained Learning »
Luiz Chamon · Alejandro Ribeiro 
2019 Workshop: Solving inverse problems with deep networks: New architectures, theoretical foundations, and applications »
Reinhard Heckel · Paul Hand · Richard Baraniuk · Joan Bruna · Alexandros Dimakis · Deanna Needell 
2019 Poster: Gradient Dynamics of Shallow Univariate ReLU Networks »
Francis Williams · Matthew Trager · Daniele Panozzo · Claudio Silva · Denis Zorin · Joan Bruna 
2019 Poster: On the Expressive Power of Deep Polynomial Neural Networks »
Joe Kileel · Matthew Trager · Joan Bruna 
2019 Poster: Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias »
Stéphane d'Ascoli · Levent Sagun · Giulio Biroli · Joan Bruna 
2019 Poster: On the equivalence between graph isomorphism testing and function approximation with GNNs »
Zhengdao Chen · Soledad Villar · Lei Chen · Joan Bruna 
2019 Poster: Constrained Reinforcement Learning Has Zero Duality Gap »
Santiago Paternain · Luiz Chamon · Miguel Calvo-Fullana · Alejandro Ribeiro 
2017 Poster: Approximate Supermodularity Bounds for Experimental Design »
Luiz Chamon · Alejandro Ribeiro 
2017 Poster: First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization »
Aryan Mokhtari · Alejandro Ribeiro 
2017 Tutorial: Geometric Deep Learning on Graphs and Manifolds »
Michael Bronstein · Joan Bruna · arthur szlam · Xavier Bresson · Yann LeCun 
2016 Poster: Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy »
Aryan Mokhtari · Hadi Daneshmand · Aurelien Lucchi · Thomas Hofmann · Alejandro Ribeiro 
2014 Poster: Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation »
Emily Denton · Wojciech Zaremba · Joan Bruna · Yann LeCun · Rob Fergus