Integrating functions on discrete domains into neural networks is key to developing their capability to reason about discrete objects. However, discrete domains are (1) not naturally amenable to gradient-based optimization, and (2) incompatible with deep learning architectures that rely on representations in high-dimensional vector spaces. In this work, we address both difficulties for set functions, which capture many important discrete problems. First, we develop a framework for extending set functions onto low-dimensional continuous domains, where many extensions are naturally defined. Our framework subsumes many well-known extensions as special cases. Second, to avoid undesirable low-dimensional neural network bottlenecks, we convert low-dimensional extensions into representations in high-dimensional spaces, taking inspiration from the success of semidefinite programs for combinatorial optimization. Empirically, we observe benefits of our extensions for unsupervised neural combinatorial optimization, in particular with high-dimensional representations.
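To make the idea of a continuous extension concrete: one well-known special case that such frameworks subsume is the Lovász extension, which interpolates a set function f defined on subsets of {1, …, n} to the whole cube [0,1]^n by sorting coordinates and accumulating marginal gains. The sketch below is illustrative only, assuming a generic set function with f(∅) = 0; the function name `lovasz_extension` is a hypothetical helper, not the paper's implementation.

```python
import numpy as np

def lovasz_extension(f, x):
    """Evaluate the Lovász extension of a set function f at a point x in [0,1]^n.

    f: callable mapping a frozenset of indices to a float, with f(frozenset()) == 0.
    x: 1-D array-like of coordinates in [0, 1].
    """
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)            # visit coordinates in decreasing order
    total, prev = 0.0, frozenset()
    for i in order:
        cur = prev | {int(i)}
        # weight the marginal gain of adding element i by its coordinate x_i
        total += x[int(i)] * (f(cur) - f(prev))
        prev = cur
    return total

# Example: for the cardinality function f(S) = |S|, every marginal gain is 1,
# so the extension reduces to the sum of the coordinates.
card = lambda S: len(S)
print(lovasz_extension(card, [0.3, 0.7, 0.1]))  # 1.1
```

On indicator vectors of sets, the extension agrees with f itself (e.g. `lovasz_extension(card, [1, 0, 1])` returns 2.0), which is exactly the interpolation property that makes such extensions usable as differentiable surrogates for discrete objectives.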
Author Information
Nikolaos Karalias (EPFL)
Joshua Robinson (MIT)
Andreas Loukas (Prescient Design, gRED, Roche)
Stefanie Jegelka (MIT)
More from the Same Authors
- 2022 : A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences »
  Nataša Tagasovska · Nathan Frey · Andreas Loukas · Isidro Hotzel · Julien Lafrance-Vanasse · Ryan Kelly · Yan Wu · Arvind Rajpal · Richard Bonneau · Kyunghyun Cho · Stephen Ra · Vladimir Gligorijevic
- 2023 Poster: Limits, approximation and size transferability for GNNs on sparse graphs via graphops »
  Thien Le · Stefanie Jegelka
- 2023 Poster: Expressive Sign Equivariant Networks for Spectral Geometric Learning »
  Derek Lim · Joshua Robinson · Stefanie Jegelka · Haggai Maron
- 2023 Poster: AbDiffuser: full-atom generation of in-vitro functioning antibodies »
  Karolis Martinkus · Jan Ludwiczak · WEI-CHING LIANG · Julien Lafrance-Vanasse · Isidro Hotzel · Arvind Rajpal · Yan Wu · Kyunghyun Cho · Richard Bonneau · Vladimir Gligorijevic · Andreas Loukas
- 2023 Poster: The Exact Sample Complexity Gain from Invariances for Kernel Regression »
  Behrooz Tahmasebi · Stefanie Jegelka
- 2023 Poster: What is the Inductive Bias of Flatness Regularization? A Study of Deep Matrix Factorization Models »
  Khashayar Gatmiry · Zhiyuan Li · Tengyu Ma · Sashank Reddi · Stefanie Jegelka · Ching-Yao Chuang
- 2023 Workshop: Heavy Tails in ML: Structure, Stability, Dynamics »
  Mert Gurbuzbalaban · Stefanie Jegelka · Michael Mahoney · Umut Simsekli
- 2022 : Panel »
  Roman Garnett · José Miguel Hernández-Lobato · Eytan Bakshy · Syrine Belakaria · Stefanie Jegelka
- 2022 Workshop: New Frontiers in Graph Learning »
  Jiaxuan You · Marinka Zitnik · Rex Ying · Yizhou Sun · Hanjun Dai · Stefanie Jegelka
- 2022 Poster: Tree Mover's Distance: Bridging Graph Metrics and Stability of Graph Neural Networks »
  Ching-Yao Chuang · Stefanie Jegelka
- 2022 Poster: On the generalization of learning algorithms that do not converge »
  Nisha Chandramoorthy · Andreas Loukas · Khashayar Gatmiry · Stefanie Jegelka
- 2021 Poster: Can contrastive learning avoid shortcut solutions? »
  Joshua Robinson · Li Sun · Ke Yu · Kayhan Batmanghelich · Stefanie Jegelka · Suvrit Sra
- 2021 Poster: What training reveals about neural network complexity »
  Andreas Loukas · Marinos Poiitis · Stefanie Jegelka
- 2021 Poster: SQALER: Scaling Question Answering by Decoupling Multi-Hop and Logical Reasoning »
  Mattia Atzeni · Jasmina Bogojeska · Andreas Loukas
- 2021 Poster: Partition and Code: learning how to compress graphs »
  Giorgos Bouritsas · Andreas Loukas · Nikolaos Karalias · Michael Bronstein
- 2020 Poster: Adaptive Sampling for Stochastic Risk-Averse Learning »
  Sebastian Curi · Kfir Y. Levy · Stefanie Jegelka · Andreas Krause
- 2020 Poster: How hard is to distinguish graphs with graph neural networks? »
  Andreas Loukas
- 2020 Poster: Building powerful and equivariant graph neural networks with structural message-passing »
  Clément Vignac · Andreas Loukas · Pascal Frossard
- 2020 Poster: Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs »
  Nikolaos Karalias · Andreas Loukas
- 2020 Oral: Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs »
  Nikolaos Karalias · Andreas Loukas
- 2020 Poster: Debiased Contrastive Learning »
  Ching-Yao Chuang · Joshua Robinson · Yen-Chen Lin · Antonio Torralba · Stefanie Jegelka
- 2020 Spotlight: Debiased Contrastive Learning »
  Ching-Yao Chuang · Joshua Robinson · Yen-Chen Lin · Antonio Torralba · Stefanie Jegelka
- 2019 Workshop: Graph Representation Learning »
  Will Hamilton · Rianne van den Berg · Michael Bronstein · Stefanie Jegelka · Thomas Kipf · Jure Leskovec · Renjie Liao · Yizhou Sun · Petar Veličković
- 2019 Poster: Flexible Modeling of Diversity with Strongly Log-Concave Distributions »
  Joshua Robinson · Suvrit Sra · Stefanie Jegelka