To better conform to data geometry, recent deep generative modeling techniques adapt Euclidean constructions to non-Euclidean spaces. In this paper, we study normalizing flows on manifolds. Previous work has developed flow models for specific cases; however, these advancements hand-craft layers on a manifold-by-manifold basis, restricting generality and inducing cumbersome design constraints. We overcome these issues by introducing Neural Manifold Ordinary Differential Equations, a manifold generalization of Neural ODEs, which enables the construction of Manifold Continuous Normalizing Flows (MCNFs). MCNFs require only local geometry (therefore generalizing to arbitrary manifolds) and compute probabilities with continuous change of variables (allowing for a simple and expressive flow construction). We find that leveraging continuous manifold dynamics produces a marked improvement for both density estimation and downstream tasks.
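The continuous change of variables the abstract refers to can be sketched in the flat Euclidean case: the state follows an ODE dz/dt = f(z, t), while its log-density evolves as d(log p)/dt = -div f(z, t). The following is a minimal NumPy illustration with a forward-Euler integrator, not the paper's manifold construction; the names `cnf_euler`, `f`, and `div_f` are hypothetical, and the manifold version would additionally work in local charts.

```python
import numpy as np

def cnf_euler(z0, logp0, f, div_f, t0=0.0, t1=1.0, steps=1000):
    """Integrate a continuous normalizing flow with forward Euler.

    Dynamics: dz/dt = f(z, t) and, by the instantaneous change of
    variables, d(log p)/dt = -div f(z, t).
    """
    dt = (t1 - t0) / steps
    z, logp, t = np.array(z0, dtype=float), float(logp0), t0
    for _ in range(steps):
        dz = f(z, t)
        logp = logp - dt * div_f(z, t)  # density update at the current state
        z = z + dt * dz                 # state update
        t += dt
    return z, logp

# Toy vector field: a linear contraction f(z) = -z has divergence -dim,
# so the log-density increases by exactly dim * (t1 - t0) along the flow.
f = lambda z, t: -z
div_f = lambda z, t: -float(z.size)

z1, logp1 = cnf_euler([1.0, 2.0], 0.0, f, div_f)
# z1 approximates exp(-1) * [1, 2]; logp1 approximates 2.0
```

Because the divergence is known in closed form here, no trace estimator is needed; practical CNF implementations typically estimate the divergence stochastically when the dimension is large.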
Author Information
Aaron Lou (Cornell University)
Derek Lim (Cornell University)
Isay Katsman (Cornell University)
Leo Huang (Cornell University)
Qingxuan Jiang (Cornell University)
Ser Nam Lim (Facebook AI)
Christopher De Sa (Cornell University)
More from the Same Authors
- 2021 : Mix-MaxEnt: Improving Accuracy and Uncertainty Estimates of Deterministic Neural Networks
  Francesco Pinto · Harry Yang · Ser Nam Lim · Philip Torr · Puneet Dokania
- 2023 Poster: Riemannian Residual Neural Networks
  Isay Katsman · Eric M Chen · Sidhanth Holalkere · Anna Asch · Aaron Lou · Ser Nam Lim · Christopher De Sa
- 2023 Poster: Test-Time Distribution Normalization for Contrastively Learned Visual-language Models
  Yifei Zhou · Juntao Ren · Fengyu Li · Ramin Zabih · Ser Nam Lim
- 2023 Poster: Video Dynamics Prior: An Internal Learning Approach for Robust Video Enhancements
  Gaurav Shrivastava · Ser Nam Lim · Abhinav Shrivastava
- 2022 Poster: Using Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness
  Francesco Pinto · Harry Yang · Ser Nam Lim · Philip Torr · Puneet Dokania
- 2022 Poster: Spartan: Differentiable Sparsity via Regularized Transportation
  Kai Sheng Tai · Taipeng Tian · Ser Nam Lim
- 2022 Poster: FedSR: A Simple and Effective Domain Generalization Method for Federated Learning
  A. Tuan Nguyen · Philip Torr · Ser Nam Lim
- 2022 Poster: GAPX: Generalized Autoregressive Paraphrase-Identification X
  Yifei Zhou · Renyu Li · Hayden Housen · Ser Nam Lim
- 2022 Poster: Few-Shot Fast-Adaptive Anomaly Detection
  Ze Wang · Yipin Zhou · Rui Wang · Tsung-Yu Lin · Ashish Shah · Ser Nam Lim
- 2022 Poster: HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions
  Yongming Rao · Wenliang Zhao · Yansong Tang · Jie Zhou · Ser Nam Lim · Jiwen Lu
- 2021 Poster: Learning to Ground Multi-Agent Communication with Autoencoders
  Toru Lin · Jacob Huh · Christopher Stauffer · Ser Nam Lim · Phillip Isola
- 2021 Poster: Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods
  Derek Lim · Felix Hohne · Xiuyu Li · Sijia Linda Huang · Vaishnavi Gupta · Omkar Bhalerao · Ser Nam Lim
- 2021 Poster: Representing Hyperbolic Space Accurately using Multi-Component Floats
  Tao Yu · Christopher De Sa
- 2021 Poster: NeRV: Neural Representations for Videos
  Hao Chen · Bo He · Hanyu Wang · Yixuan Ren · Ser Nam Lim · Abhinav Shrivastava
- 2021 Poster: Hyperparameter Optimization Is Deceiving Us, and How to Stop It
  A. Feder Cooper · Yucheng Lu · Jessica Forde · Christopher De Sa
- 2021 Poster: Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
  Tolga Birdal · Aaron Lou · Leonidas Guibas · Umut Simsekli
- 2021 Poster: Equivariant Manifold Flows
  Isay Katsman · Aaron Lou · Derek Lim · Qingxuan Jiang · Ser Nam Lim · Christopher De Sa
- 2021 Poster: A Continuous Mapping For Augmentation Design
  Keyu Tian · Chen Lin · Ser Nam Lim · Wanli Ouyang · Puneet Dokania · Philip Torr
- 2021 Poster: Scaling Gaussian Processes with Derivative Information Using Variational Inference
  Misha Padidar · Xinran Zhu · Leo Huang · Jacob Gardner · David Bindel
- 2020 : Deep Riemannian Manifold Learning
  Aaron Lou · Maximilian Nickel · Brandon Amos
- 2020 Workshop: Differential Geometry meets Deep Learning (DiffGeo4DL)
  Joey Bose · Emile Mathieu · Charline Le Lan · Ines Chami · Frederic Sala · Christopher De Sa · Maximilian Nickel · Christopher Ré · Will Hamilton
- 2020 Poster: Better Set Representations For Relational Reasoning
  Qian Huang · Horace He · Abhay Singh · Yan Zhang · Ser Nam Lim · Austin Benson
- 2020 Poster: Random Reshuffling is Not Always Better
  Christopher De Sa
- 2020 Poster: Asymptotically Optimal Exact Minibatch Metropolis-Hastings
  Ruqi Zhang · A. Feder Cooper · Christopher De Sa
- 2020 Spotlight: Asymptotically Optimal Exact Minibatch Metropolis-Hastings
  Ruqi Zhang · A. Feder Cooper · Christopher De Sa
- 2020 Spotlight: Random Reshuffling is Not Always Better
  Christopher De Sa
- 2019 Poster: Numerically Accurate Hyperbolic Embeddings Using Tiling-Based Models
  Tao Yu · Christopher De Sa
- 2019 Spotlight: Numerically Accurate Hyperbolic Embeddings Using Tiling-Based Models
  Tao Yu · Christopher De Sa
- 2019 Poster: Dimension-Free Bounds for Low-Precision Training
  Zheng Li · Christopher De Sa
- 2019 Poster: Poisson-Minibatching for Gibbs Sampling with Convergence Rate Guarantees
  Ruqi Zhang · Christopher De Sa
- 2019 Spotlight: Poisson-Minibatching for Gibbs Sampling with Convergence Rate Guarantees
  Ruqi Zhang · Christopher De Sa
- 2019 Poster: Channel Gating Neural Networks
  Weizhe Hua · Yuan Zhou · Christopher De Sa · Zhiru Zhang · G. Edward Suh