Workshop
Sat Dec 09 08:00 AM -- 06:30 PM (PST) @ 104 A
Cognitively Informed Artificial Intelligence: Insights From Natural Intelligence
Michael Mozer · Brenden Lake · Angela Yu

The goal of this workshop is to bring together cognitive scientists, neuroscientists, and AI researchers to discuss opportunities for improving machine learning by leveraging our scientific understanding of human perception and cognition. There is a history of making these connections: artificial neural networks were originally motivated by the massively parallel, deep architecture of the brain; considerations of biological plausibility have driven the development of learning procedures; and architectures for computer vision draw parallels to the connectivity and physiology of mammalian visual cortex. However, beyond these celebrated examples, cognitive science and neuroscience have fallen short of their potential to influence the next generation of AI systems. Areas such as memory, attention, and development have rich theoretical and experimental histories, yet these concepts, as applied to AI systems so far, bear only a superficial resemblance to their biological counterparts.

The premise of this workshop is that there are valuable data and models from cognitive science that can inform the development of intelligent adaptive machines, and can endow learning architectures with the strength and flexibility of the human cognitive architecture. The structures and mechanisms of the mind and brain can provide the sort of strong inductive bias needed for machine-learning systems to attain human-like performance. We conjecture that this inductive bias will become more important as researchers move from domain-specific tasks such as object and speech recognition toward tackling general intelligence and the human-like ability to dynamically reconfigure cognition in service of changing goals. For ML researchers, the workshop will provide access to a wealth of data and concepts situated in the context of contemporary ML. For cognitive scientists, the workshop will suggest research questions that are of critical interest to ML researchers.

The workshop will focus on three interconnected topics of particular relevance to ML:

(1) Learning and development. Cognitive capabilities expressed early in a child’s development are likely to be crucial for bootstrapping adult learning and intelligence. Intuitive physics and intuitive psychology allow the developing organism to build an understanding of the world and of other agents. Additionally, children and adults often demonstrate “learning-to-learn,” where previous concepts and skills form a compositional basis for learning new concepts and skills.

(2) Memory. Human memory operates on multiple time scales, from memories that literally persist for the blink of an eye to those that persist for a lifetime. These different forms of memory serve different computational purposes. Although forgetting is typically thought of as a disadvantage, the ability to selectively forget/override irrelevant knowledge in nonstationary environments is highly desirable.

(3) Attention and Decision Making. These are relatively high-level cognitive functions that allow an agent, guided by task demands, to purposefully control its external environment and sensory data stream, to dynamically reconfigure its internal representations and architecture, and to devise action plans that strategically trade off multiple, often conflicting behavioral objectives.

The long-term aims of this workshop are:

* to promote work that incorporates insights from human cognition to suggest novel and improved AI architectures;

* to facilitate the development of ML methods that can better predict human behavior; and

* to support the development of a field of ‘cognitive computing’ that is more than a marketing slogan: a field that improves on both natural and artificial cognition by synergistically advancing each and integrating their strengths in complementary ways.

Workshop overview (talk)
Cognitive AI (talk)
Computational modeling of human face processing (talk)
People infer object shape in a 3D, object-centered coordinate system (talk)
Relational neural expectation maximization (talk)
Contextual dependence of human preference for complex objects: A Bayesian statistical account (spotlight)
A biologically-inspired sparse, topographic recurrent neural network model for robust change detection (spotlight)
Visual attention guided deep imitation learning (spotlight)
Human learning of video games (spotlight)
COFFEE BREAK AND POSTER SESSION (break)
Life history and learning: Extended human childhood as a way to resolve explore/exploit trade-offs and improve hypothesis search (talk)
Meta-reinforcement learning in brains and machines (talk)
Revealing human inductive biases and metacognitive processes with rational models (talk)
Learning to select computations (talk)
From deep learning of disentangled representations to higher-level cognition (talk)
Access consciousness and the construction of actionable representations (talk)
Evaluating the capacity to reason about beliefs (talk)
COFFEE BREAK AND POSTER SESSION II (break)
Mapping the spatio-temporal dynamics of cognition in the human brain (talk)
Scale-invariant temporal memory in AI (talk)
Scale-invariant temporal history (SITH): Optimal slicing of the past in an uncertain world (talk)
Efficient human-like semantic representations via the information bottleneck principle (spotlight)
The mutation sampler: A sampling approach to causal representation (spotlight)
Generating more human-like recommendations with a cognitive model of generalization (spotlight)
POSTER: Using STDP for unsupervised, event-based online learning (poster)
POSTER: Learning to organize knowledge with N-gram machines (poster)
POSTER: Power-law temporal discounting over a logarithmically compressed timeline for scale invariant reinforcement learning (poster)
POSTER: Improving transfer using augmented feedback in progressive neural networks (poster)
POSTER: Variational probability flow for biologically plausible training of deep neural networks (poster)
POSTER: Sample-efficient reinforcement learning through transfer and architectural priors (poster)
POSTER: Curiosity-driven reinforcement learning with homeostatic regulation (poster)
POSTER: Context-modulation of hippocampal dynamics and deep convolutional networks (poster)
POSTER: Cognitive modeling and the wisdom of the crowd (poster)
POSTER: Concept acquisition through meta-learning (poster)
POSTER: Pre-training attentional mechanisms (poster)
POSTER: Question asking as program generation (poster)
Object-oriented intelligence (talk)
Representational primitives, in minds and machines (talk)