In this tutorial, we will provide modern perspectives on abstraction and reasoning in AI systems. Traditionally, symbolic and probabilistic methods have dominated the domains of concept formation, abstraction, and automated reasoning. More recently, deep learning-based approaches have led to breakthroughs in some domains, such as hard search problems in games and combinatorial tasks. However, the resulting systems remain limited in scope and capability, especially in producing interpretable results and verifiable abstractions. Here, we will address a set of questions: Why is the ability to form conceptual abstractions essential for intelligence, in both humans and machines? How can we get machines to learn flexible and extensible concepts that transfer between domains? What do we mean by "strong reasoning capabilities," and how do we measure these capabilities in AI systems? How do deep learning-based methods change the landscape of computer-assisted reasoning? What are the failure modes of such methods, and what are possible solutions to these issues?
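As a toy illustration of the kind of conceptual abstraction and analogy-making discussed in this tutorial, consider letter-string analogies of the form "abc is to abd as ijk is to what?". The sketch below is purely illustrative and not any published system; the function names (`infer_rule`, `solve_analogy`) and the tiny two-rule hypothesis space are our own assumptions, meant only to show that analogy requires inferring an abstract rule from one example and re-applying it in a new context.

```python
def successor(c: str) -> str:
    """Return the next letter in the alphabet ('z' wraps to 'a')."""
    return chr((ord(c) - ord('a') + 1) % 26 + ord('a'))

def infer_rule(source: str, target: str):
    """Infer a transformation rule from a single example pair.

    Only two hypotheses are considered: 'replace the last letter with
    its successor' and 'identity'. Real analogy-making systems search
    a far richer space of candidate abstractions.
    """
    if target == source[:-1] + successor(source[-1]):
        return lambda s: s[:-1] + successor(s[-1])
    if target == source:
        return lambda s: s
    return None

def solve_analogy(a: str, b: str, c: str) -> str:
    """Given the example a -> b, apply the inferred rule to c."""
    rule = infer_rule(a, b)
    if rule is None:
        raise ValueError("no rule found in the hypothesis space")
    return rule(c)

print(solve_analogy("abc", "abd", "ijk"))  # ijl
```

The point of the toy is the gap it exposes: the hand-coded hypothesis space answers this one analogy, but a system with genuine conceptual abstraction would construct such rules itself and transfer them across domains.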
Schedule
7:00pm - 7:40pm UTC Speaker: Francois Chollet Title: Why abstraction is the key, and what we're still missing
7:40pm - 7:50pm UTC Questions
7:50pm - 8:30pm UTC Speaker: Melanie Mitchell Title: Mechanisms of abstraction and analogy in natural and artificial intelligence
8:30pm - 8:40pm UTC Questions
8:40pm - 9:20pm UTC Speaker: Christian Szegedy Title: Deep learning for mathematical reasoning
9:20pm - 9:30pm UTC Questions
Francois Chollet (Google)
Francois Chollet is a software engineer at Google, where he leads the team that makes Keras, a major deep learning framework. He is the author of numerous publications in the field of deep learning, including a best-selling textbook. His current research focuses on abstraction generation, analogical reasoning, and how to achieve greater generality in artificial intelligence.
Melanie Mitchell (Santa Fe Institute)
Melanie Mitchell is the Davis Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).
Christian Szegedy (Google)
Christian Szegedy is a machine learning scientist at Google Research. He holds a PhD in mathematics from the University of Bonn, Germany. His most influential past work includes the discovery of adversarial examples and several computer vision architectures for image recognition and object detection. He is a co-inventor of batch normalization. He is currently working on automated theorem proving and the auto-formalization of mathematics via deep learning.
Related Events (a corresponding poster, oral, or spotlight)
2020 Tutorial: (Track1) Abstraction & Reasoning in AI systems: Modern Perspectives Q&A
Wed. Dec 9th 10:00 -- 10:50 PM
More from the Same Authors
2021 Poster: A Little Robustness Goes a Long Way: Leveraging Robust Features for Targeted Transfer Attacks
Jacob Springer · Melanie Mitchell · Garrett Kenyon
2021 Panel: The Consequences of Massive Scaling in Machine Learning
Noah Goodman · Melanie Mitchell · Joelle Pineau · Oriol Vinyals · Jared Kaplan
2018: Lunch provided and Open Source ML Systems Showcase (TensorFlow, PyTorch 1.0, MXNet, Keras, CoreML, Ray, Chainer)
Rajat Monga · Soumith Chintala · Thierry Moreau · Francois Chollet · Daniel Crankshaw · Robert Nishihara · Seiya Tokui
2016 Poster: DeepMath - Deep Sequence Models for Premise Selection
Geoffrey Irving · Christian Szegedy · Alexander Alemi · Niklas Een · Francois Chollet · Josef Urban
2013 Poster: Deep Neural Networks for Object Detection
Christian Szegedy · Alexander Toshev · Dumitru Erhan