Ever since the 1950s, AI scientists have experimented with using computer technology to emulate human intelligence. Universal function approximation theorems promised success in this pursuit, provided the tasks that human intelligence solves could be formulated as continuous function approximation problems and provided enough scale was available to train MLPs of arbitrary width or depth. Yet we find ourselves in the age of billion-parameter models and are still far from replicating all aspects of human intelligence. Moreover, our models are not MLPs, but convolutional, recurrent, or otherwise structured neural networks. In this talk we discuss why that is, and consider the general principles that can guide us towards building a new generation of neural networks with the kinds of structure needed to solve the full spectrum of tasks that human intelligence can solve.
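For context on the claim above, the classical universal approximation theorem (Cybenko 1989; Hornik 1991) can be stated roughly as follows: for any continuous function f on a compact set K in R^n, any sigmoidal activation sigma, and any epsilon > 0, there exist a width N and parameters alpha_i, b_i in R and w_i in R^n such that

\[
\sup_{x \in K} \,\Bigl|\, f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma\bigl(w_i^{\top} x + b_i\bigr) \Bigr| < \varepsilon .
\]

Note that the theorem guarantees only that such an N exists; it may be arbitrarily large, and the result says nothing about how to find the parameters, which is one way to read the abstract's point that scale alone has not sufficed.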
Author Information
Irina Higgins (DeepMind)
More from the Same Authors
- 2021 : Which priors matter? Benchmarking models for learning latent dynamics
  Aleksandar Botev · Andrew Jaegle · Peter Wirnsberger · Daniel Hennes · Irina Higgins
- 2022 : Solving Math Word Problems with Process-based and Outcome-based Feedback
  Jonathan Uesato · Nate Kushman · Ramana Kumar · H. Francis Song · Noah Siegel · Lisa Wang · Antonia Creswell · Geoffrey Irving · Irina Higgins
- 2022 : Panel Discussion I: Geometric and topological principles for representation learning in ML
  Irina Higgins · Taco Cohen · Erik Bekkers · Nina Miolane · Rose Yu
- 2022 : Symmetry-Based Representations for Artificial and Biological Intelligence
  Irina Higgins
- 2022 Workshop: Information-Theoretic Principles in Cognitive Systems
  Noga Zaslavsky · Mycal Tucker · Sarah Marzen · Irina Higgins · Stephanie Palmer · Samuel J Gershman
- 2021 : Invited Talk #3 - Disentanglement for Controllable Image Generation (Irina Higgins)
  Irina Higgins
- 2021 Poster: SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred from Vision
  Irina Higgins · Peter Wirnsberger · Andrew Jaegle · Aleksandar Botev
- 2021 Tutorial: Pay Attention to What You Need: Do Structural Priors Still Matter in the Age of Billion Parameter Models?
  Irina Higgins · Antonia Creswell · Sébastien Racanière
- 2020 : Invited Talk: Irina Higgins
  Irina Higgins
- 2020 : Panel Discussion
  Jessica Hamrick · Klaus Greff · Michelle A. Lee · Irina Higgins · Josh Tenenbaum
- 2020 Poster: Disentangling by Subspace Diffusion
  David Pfau · Irina Higgins · Alex Botev · Sébastien Racanière
- 2019 : Panel Discussion: What sorts of cognitive or biological (architectural) inductive biases will be crucial for developing effective artificial intelligence?
  Irina Higgins · Talia Konkle · Matthias Bethge · Nikolaus Kriegeskorte
- 2019 : What is disentangling and does intelligence need it?
  Irina Higgins
- 2018 : Invited Talk 3
  Irina Higgins
- 2018 Poster: Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies
  Alessandro Achille · Tom Eccles · Loic Matthey · Chris Burgess · Nicholas Watters · Alexander Lerchner · Irina Higgins
- 2018 Spotlight: Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies
  Alessandro Achille · Tom Eccles · Loic Matthey · Chris Burgess · Nicholas Watters · Alexander Lerchner · Irina Higgins
- 2017 : Irina Higgins
  Irina Higgins