Talk: Disentanglement for Controllable Image Generation
Abstract: When it comes to generating diverse and plausible complex visual scenes from interpretable interfaces using deep learning, unsupervised disentangled representation learning can be very helpful. These methods automatically discover the semantically meaningful attributes of a dataset and represent them in a human-interpretable, low-dimensional form that can be manipulated to generate a wide range of new plausible visual scenes. Disentangled representations are also conducive to semantic analogy-making and sample-efficient language grounding, which enables diverse language-controlled image manipulation and rendering. In this talk we will cover the strengths and limitations of current methods for disentangled representation learning, and touch on the frontiers of this line of research, where radically new approaches are starting to emerge from causal, physics-inspired, geometric and contrastive frameworks.
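To make the idea concrete, the best-known approach in this line of work is the beta-VAE objective (Higgins et al., 2017): a reconstruction term plus a beta-weighted KL term that pressures the posterior towards a factorised isotropic prior, encouraging each latent dimension to capture one semantic attribute. The sketch below is purely illustrative, not the talk's code: the toy linear encoder/decoder, the shapes, and the choice beta=4 are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Reconstruction error plus a beta-weighted KL to an isotropic prior.

    beta > 1 increases the pressure towards the factorised N(0, I) prior,
    which is what encourages disentangled latent dimensions.
    """
    # Squared-error reconstruction term, summed over pixels, averaged over batch.
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    # Closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian posterior.
    kl = 0.5 * np.mean(np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1))
    return recon + beta * kl

# Toy forward pass: 8 "images" flattened to 16 pixels, 4 latent dimensions.
x = rng.normal(size=(8, 16))
W_enc = rng.normal(size=(16, 4)) * 0.1            # illustrative linear encoder
mu, log_var = x @ W_enc, np.zeros((8, 4))         # unit-variance posterior
z = mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)  # reparameterisation
x_recon = z @ W_enc.T                             # tied-weight linear decoder
print(beta_vae_loss(x, x_recon, mu, log_var))
```

In a trained model, varying a single dimension of z while holding the others fixed is the "interpretable interface" the abstract refers to: each such traversal should change one attribute of the generated scene.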
Bio: Irina is a Staff Research Scientist at DeepMind, where she works in the Frontiers team. Her work aims to bring together insights from the fields of neuroscience and physics to advance general artificial intelligence through improved representation learning. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an undergraduate student in Experimental Psychology at Westminster University, followed by a DPhil at the Oxford Centre for Computational Neuroscience and Artificial Intelligence, where she focused on understanding the computational principles underlying speech processing in the auditory brain. During her DPhil, Irina also developed poker AI, applied machine learning in the finance sector, and worked on speech recognition at Google Research.
Author Information
Irina Higgins (DeepMind)
More from the Same Authors
- 2021: Which priors matter? Benchmarking models for learning latent dynamics
  Aleksandar Botev · Andrew Jaegle · Peter Wirnsberger · Daniel Hennes · Irina Higgins
- 2022: Solving Math Word Problems with Process-based and Outcome-based Feedback
  Jonathan Uesato · Nate Kushman · Ramana Kumar · H. Francis Song · Noah Siegel · Lisa Wang · Antonia Creswell · Geoffrey Irving · Irina Higgins
- 2022: Panel Discussion I: Geometric and topological principles for representation learning in ML
  Irina Higgins · Taco Cohen · Erik Bekkers · Nina Miolane · Rose Yu
- 2022: Symmetry-Based Representations for Artificial and Biological Intelligence
  Irina Higgins
- 2022 Workshop: Information-Theoretic Principles in Cognitive Systems
  Noga Zaslavsky · Mycal Tucker · Sarah Marzen · Irina Higgins · Stephanie Palmer · Samuel J Gershman
- 2021 Poster: SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred from Vision
  Irina Higgins · Peter Wirnsberger · Andrew Jaegle · Aleksandar Botev
- 2021 Tutorial: Pay Attention to What You Need: Do Structural Priors Still Matter in the Age of Billion Parameter Models?
  Irina Higgins · Antonia Creswell · Sébastien Racanière
- 2021: Why do we Need Structure and Where does it Come From?
  Irina Higgins
- 2020: Invited Talk: Irina Higgins
  Irina Higgins
- 2020: Panel Discussion
  Jessica Hamrick · Klaus Greff · Michelle A. Lee · Irina Higgins · Josh Tenenbaum
- 2020 Poster: Disentangling by Subspace Diffusion
  David Pfau · Irina Higgins · Alex Botev · Sébastien Racanière
- 2019: Panel Discussion: What sorts of cognitive or biological (architectural) inductive biases will be crucial for developing effective artificial intelligence?
  Irina Higgins · Talia Konkle · Matthias Bethge · Nikolaus Kriegeskorte
- 2019: What is disentangling and does intelligence need it?
  Irina Higgins
- 2018: Invited Talk 3
  Irina Higgins
- 2018 Poster: Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies
  Alessandro Achille · Tom Eccles · Loic Matthey · Chris Burgess · Nicholas Watters · Alexander Lerchner · Irina Higgins
- 2018 Spotlight: Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies
  Alessandro Achille · Tom Eccles · Loic Matthey · Chris Burgess · Nicholas Watters · Alexander Lerchner · Irina Higgins
- 2017: Irina Higgins
  Irina Higgins