Tutorial
Pay Attention to What You Need: Do Structural Priors Still Matter in the Age of Billion Parameter Models?
Irina Higgins · Antonia Creswell · Sébastien Racanière

Mon Dec 06 01:00 AM -- 04:30 AM (PST)

The last few years have seen the emergence of billion-parameter models trained on 'infinite' data that achieve impressive performance on many tasks, suggesting that big data and big models may be all we need. But how far can this approach take us, in particular in domains where data is more limited? In many situations, adding structured architectural priors to models may be key to achieving faster learning, better generalisation, and learning from less data. Structure can be added at the level of perception and at the level of reasoning, the latter being the long-standing goal of GOFAI research. In this tutorial we will use the idea of symmetries and symbolic reasoning as an overarching theoretical framework to describe many of the common structural priors that have proven successful for building more data-efficient and generalisable perceptual models, as well as models that support better reasoning in neuro-symbolic approaches.
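
To make the notion of a symmetry-based structural prior concrete, the following minimal sketch (not drawn from the tutorial materials; all names are illustrative) checks numerically that a circular 1-D convolution is equivariant to translations, while a generic dense layer is not. Weight sharing in the convolution is exactly the kind of architectural prior the abstract refers to.

    import numpy as np

    rng = np.random.default_rng(0)

    def circular_conv(x, w):
        # Circular 1-D convolution: a linear map built to commute with translations.
        n = len(x)
        return np.array([sum(w[k] * x[(i - k) % n] for k in range(len(w)))
                         for i in range(n)])

    def shift(x, s):
        # Cyclic shift by s positions: the group action of 1-D translations.
        return np.roll(x, s)

    x = rng.normal(size=16)        # toy 1-D signal
    w = rng.normal(size=5)         # shared convolution weights (the structural prior)
    W = rng.normal(size=(16, 16))  # unconstrained dense layer, no prior

    # Equivariance: shifting the input and then applying the layer gives the
    # same result as applying the layer and then shifting the output.
    print(np.allclose(circular_conv(shift(x, 3), w),
                      shift(circular_conv(x, w), 3)))   # True
    print(np.allclose(W @ shift(x, 3), shift(W @ x, 3)))  # False (almost surely)

The convolution satisfies the equivariance check by construction, for any weights; the dense layer would have to learn this behaviour from data, which is one intuition for why such priors can improve data efficiency.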

Author Information

Irina Higgins (DeepMind)

Irina Higgins is a Staff Research Scientist at DeepMind, where she works in the Frontiers team. Her work aims to bring together insights from the fields of neuroscience and physics to advance general artificial intelligence through improved representation learning. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an undergraduate student in Experimental Psychology at Westminster University, followed by a DPhil at the Oxford Centre for Computational Neuroscience and Artificial Intelligence, where she focused on understanding the computational principles underlying speech processing in the auditory brain. During her DPhil, Irina also developed poker AI, applied machine learning in the finance sector, and worked on speech recognition at Google Research.

Antonia Creswell (Imperial College London)

Antonia Creswell is a Senior Research Scientist at DeepMind in the Cognition team. Her work focuses on the learning and integration of object representations in dynamic models. She completed her PhD on representation learning in the Department of Bioengineering at Imperial College London.

Sébastien Racanière (DeepMind)

Sébastien Racanière is a Staff Research Engineer at DeepMind. His current interests in ML revolve around the interaction between physics and machine learning, with an emphasis on the use of symmetries. He obtained his PhD in pure mathematics from the Université Louis Pasteur, Strasbourg, in 2002, co-supervised by Michèle Audin (Strasbourg) and Frances Kirwan (Oxford). This was followed by a two-year Marie Curie Individual Fellowship at Imperial College London and another postdoc in Cambridge (UK). His first job in industry was at the Samsung European Research Institute, investigating the use of learning algorithms in mobile phones, followed by UGS, a Cambridge-based company, where he worked on a 3D search engine. He afterwards worked for Maxeler in London, programming FPGAs. He then moved to Google, and finally to DeepMind.
