Many problems in machine learning (ML) can be cast as learning functions from sets to graphs, or more generally to hypergraphs; in short, Set2Graph functions. Examples include clustering, learning vertex and edge features on graphs, and learning features on triplets in a collection.
A natural approach to building Set2Graph models is to characterize all linear equivariant set-to-hypergraph layers and stack them with non-linear activations. This poses two challenges: (i) the expressive power of these networks is not well understood; and (ii) these models would suffer from high, often intractable, computational and memory complexity, as their dimension grows exponentially.
This paper advocates a family of neural network models for learning Set2Graph functions that is both practical and maximally expressive (universal): it can approximate arbitrary continuous Set2Graph functions over compact sets. Testing these models on different machine learning tasks, mainly an application to particle physics, we find that they compare favorably to existing baselines.
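To make the Set2Graph idea concrete, here is a minimal, hypothetical sketch (not the paper's exact architecture) of the common compositional recipe: a permutation-equivariant set-to-set encoder, a broadcasting step that maps n set elements to all n×n ordered pairs, and a final map producing edge features. All function and parameter names (`set_to_set`, `broadcast_pairs`, `set2graph`, the weight matrices) are illustrative assumptions.

```python
import numpy as np

def set_to_set(X, W1, W2):
    # DeepSets-style equivariant layer (illustrative): per-element linear
    # map plus a broadcast of the set-wide sum, then a nonlinearity.
    return np.tanh(X @ W1 + X.sum(axis=0, keepdims=True) @ W2)

def broadcast_pairs(H):
    # Map n encoded elements to n*n ordered pairs by concatenating
    # (h_i, h_j) for every pair of indices (i, j).
    n, d = H.shape
    left = np.repeat(H, n, axis=0)   # rows: h_0 x n, h_1 x n, ...
    right = np.tile(H, (n, 1))       # rows: h_0..h_{n-1}, repeated n times
    return np.concatenate([left, right], axis=1)  # (n*n, 2d)

def set2graph(X, W1, W2, W3):
    # Compose: set-to-set encoder -> pair broadcasting -> edge scorer.
    H = set_to_set(X, W1, W2)
    P = broadcast_pairs(H)
    n = X.shape[0]
    return (P @ W3).reshape(n, n, -1)  # edge features E[i, j]

rng = np.random.default_rng(0)
n, d_in, d_h, d_out = 4, 3, 8, 1
X = rng.normal(size=(n, d_in))
W1 = rng.normal(size=(d_in, d_h))
W2 = rng.normal(size=(d_in, d_h))
W3 = rng.normal(size=(2 * d_h, d_out))
E = set2graph(X, W1, W2, W3)
print(E.shape)  # (4, 4, 1)
```

By construction the output is equivariant: permuting the input set permutes the rows and columns of the edge tensor accordingly, which is the symmetry a Set2Graph function must respect.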
Author Information
Hadar Serviansky (ClearStructure)
Nimrod Segol (Weizmann Institute of Science)
Jonathan Shlomi (Weizmann Institute of Science)
Kyle Cranmer (New York University)
Kyle Cranmer is an Associate Professor of Physics at New York University and affiliated with NYU's Center for Data Science. He is an experimental particle physicist working primarily on the Large Hadron Collider in Geneva, Switzerland. He was awarded the Presidential Early Career Award for Science and Engineering in 2007 and the National Science Foundation's CAREER Award in 2009. Professor Cranmer developed a framework that enables collaborative statistical modeling, which was used extensively for the discovery of the Higgs boson in July 2012. His current interests lie at the intersection of physics and machine learning and include inference with intractable likelihoods, the development of machine learning models imbued with physics knowledge, adversarial training for robustness to systematic uncertainty, the use of generative models in the physical sciences, and the integration of reproducible workflows into the inference pipeline.
Eilam Gross (Weizmann Institute of Science)
Haggai Maron (NVIDIA Research)
I am a PhD student at the Department of Computer Science and Applied Mathematics at the Weizmann Institute of Science under the supervision of Prof. Yaron Lipman. My main fields of interest are machine learning, optimization and shape analysis. More specifically I am working on applying deep learning to irregular domains (e.g., graphs, point clouds, and surfaces) and graph/shape matching problems. I serve as a reviewer for NeurIPS, ICCV, SIGGRAPH, SIGGRAPH Asia, ACM TOG, JAIR, TVCG and SGP.
Yaron Lipman (Weizmann Institute of Science)
More from the Same Authors
- 2021 : Characterizing γ-ray maps of the Galactic Center with neural density estimation »
  Siddharth Mishra-Sharma · Kyle Cranmer
- 2021 : The Quantum Trellis: A classical algorithm for sampling the parton shower with interference effects »
  Sebastian Macaluso · Kyle Cranmer
- 2022 : Computing the Bayes-optimal classifier and exact maximum likelihood estimator with a semi-realistic generative model for jet physics »
  Kyle Cranmer · Matthew Drnevich · Lauren Greenspan · Sebastian Macaluso · Duccio Pappadopulo
- 2022 : Generalized Laplacian Positional Encoding for Graph Representation Learning »
  Sohir Maskey · Ali Parviz · Maximilian Thiessen · Hannes Stärk · Ylli Sadikaj · Haggai Maron
- 2022 Workshop: Machine Learning and the Physical Sciences »
  Atilim Gunes Baydin · Adji Bousso Dieng · Emine Kucukbenli · Gilles Louppe · Siddharth Mishra-Sharma · Benjamin Nachman · Brian Nord · Savannah Thais · Anima Anandkumar · Kyle Cranmer · Lenka Zdeborová · Rianne van den Berg
- 2022 Poster: Understanding and Extending Subgraph GNNs by Rethinking Their Symmetries »
  Fabrizio Frasca · Beatrice Bevilacqua · Michael Bronstein · Haggai Maron
- 2022 Poster: VisCo Grids: Surface Reconstruction with Viscosity and Coarea Grids »
  Albert Pumarola · Artsiom Sanakoyeu · Lior Yariv · Ali Thabet · Yaron Lipman
- 2022 Poster: Neural Conservation Laws: A Divergence-Free Perspective »
  Jack Richter-Powell · Yaron Lipman · Ricky T. Q. Chen
- 2021 : Kyle Cranmer »
  Kyle Cranmer
- 2021 Workshop: Machine Learning and the Physical Sciences »
  Anima Anandkumar · Kyle Cranmer · Mr. Prabhat · Lenka Zdeborová · Atilim Gunes Baydin · Juan Carrasquilla · Emine Kucukbenli · Gilles Louppe · Benjamin Nachman · Brian Nord · Savannah Thais
- 2021 Oral: Moser Flow: Divergence-based Generative Modeling on Manifolds »
  Noam Rozen · Aditya Grover · Maximilian Nickel · Yaron Lipman
- 2021 Oral: Volume Rendering of Neural Implicit Surfaces »
  Lior Yariv · Jiatao Gu · Yoni Kasten · Yaron Lipman
- 2021 Poster: Moser Flow: Divergence-based Generative Modeling on Manifolds »
  Noam Rozen · Aditya Grover · Maximilian Nickel · Yaron Lipman
- 2021 Poster: Volume Rendering of Neural Implicit Surfaces »
  Lior Yariv · Jiatao Gu · Yoni Kasten · Yaron Lipman
- 2020 Workshop: Machine Learning and the Physical Sciences »
  Anima Anandkumar · Kyle Cranmer · Shirley Ho · Mr. Prabhat · Lenka Zdeborová · Atilim Gunes Baydin · Juan Carrasquilla · Adji Bousso Dieng · Karthik Kashinath · Gilles Louppe · Brian Nord · Michela Paganini · Savannah Thais
- 2020 Poster: Flows for simultaneous manifold learning and density estimation »
  Johann Brehmer · Kyle Cranmer
- 2020 Poster: Discovering Symbolic Models from Deep Learning with Inductive Biases »
  Miles Cranmer · Alvaro Sanchez Gonzalez · Peter Battaglia · Rui Xu · Kyle Cranmer · David Spergel · Shirley Ho
- 2020 Poster: Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance »
  Lior Yariv · Yoni Kasten · Dror Moran · Meirav Galun · Matan Atzmon · Basri Ronen · Yaron Lipman
- 2020 Spotlight: Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance »
  Lior Yariv · Yoni Kasten · Dror Moran · Meirav Galun · Matan Atzmon · Basri Ronen · Yaron Lipman
- 2019 : Opening Remarks »
  Atilim Gunes Baydin · Juan Carrasquilla · Shirley Ho · Karthik Kashinath · Michela Paganini · Savannah Thais · Anima Anandkumar · Kyle Cranmer · Roger Melko · Mr. Prabhat · Frank Wood
- 2019 Workshop: Machine Learning and the Physical Sciences »
  Atilim Gunes Baydin · Juan Carrasquilla · Shirley Ho · Karthik Kashinath · Michela Paganini · Savannah Thais · Anima Anandkumar · Kyle Cranmer · Roger Melko · Mr. Prabhat · Frank Wood
- 2019 : Open Challenges - Spotlight Presentations »
  Francisco Sumba Toral · Haggai Maron · Arinbjörn Kolbeinsson
- 2019 Poster: Efficient Probabilistic Inference in the Quest for Physics Beyond the Standard Model »
  Atilim Gunes Baydin · Lei Shao · Wahid Bhimji · Lukas Heinrich · Saeid Naderiparizi · Andreas Munk · Jialin Liu · Bradley Gram-Hansen · Gilles Louppe · Lawrence Meadows · Philip Torr · Victor Lee · Kyle Cranmer · Mr. Prabhat · Frank Wood
- 2019 Poster: Controlling Neural Level Sets »
  Matan Atzmon · Niv Haim · Lior Yariv · Ofer Israelov · Haggai Maron · Yaron Lipman
- 2019 Poster: Provably Powerful Graph Networks »
  Haggai Maron · Heli Ben-Hamu · Hadar Serviansky · Yaron Lipman
- 2018 Poster: (Probably) Concave Graph Matching »
  Haggai Maron · Yaron Lipman
- 2018 Spotlight: (Probably) Concave Graph Matching »
  Haggai Maron · Yaron Lipman
- 2017 : Panel session »
  Iain Murray · Max Welling · Juan Carrasquilla · Anatole von Lilienfeld · Gilles Louppe · Kyle Cranmer
- 2017 Workshop: Deep Learning for Physical Sciences »
  Atilim Gunes Baydin · Mr. Prabhat · Kyle Cranmer · Frank Wood
- 2017 Poster: Learning to Pivot with Adversarial Networks »
  Gilles Louppe · Michael Kagan · Kyle Cranmer
- 2016 Invited Talk: Machine Learning and Likelihood-Free Inference in Particle Physics »
  Kyle Cranmer
- 2015 : An alternative to ABC for likelihood-free inference »
  Kyle Cranmer