

Keynote in Workshop: Gaze meets ML

Learning gaze control, external attention, and internal attention since 1990-91

Jürgen Schmidhuber


Abstract:

First I’ll discuss our early work: attentive neural networks that learn to steer foveas (1990), and learning internal spotlights of attention in Transformer-like systems (since 1991). Then I’ll mention what happened over the subsequent three decades in terms of representing percepts and action plans in hierarchical neural networks, at multiple levels of abstraction and multiple time scales. In preparation for this workshop, I made two overview web pages:
1. End-to-End Differentiable Sequential Neural Attention 1990-93: https://people.idsia.ch/~juergen/neural-attention-1990-1993.html
2. Learning internal spotlights of attention with what’s now called "Transformers with linearized self-attention," which are formally equivalent to the 1991 Fast Weight Programmers: https://people.idsia.ch/~juergen/fast-weight-programmer-1991-transformer.html
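
To illustrate the formal equivalence mentioned in point 2, here is a minimal NumPy sketch (my assumptions: toy dimensions, an elu(x)+1 feature map, and the unnormalized variant of linear attention, with the usual normalization term omitted for clarity). Unnormalized linearized self-attention at step t computes (sum over tau <= t of v_tau phi(k_tau)^T) phi(q_t), which is exactly a fast weight matrix built up by additive outer-product updates and then applied to the current query:

    import numpy as np

    rng = np.random.default_rng(0)
    T, d = 5, 4                         # toy sequence length and feature dim
    Q = rng.standard_normal((T, d))     # hypothetical per-step queries
    K = rng.standard_normal((T, d))     # hypothetical per-step keys
    V = rng.standard_normal((T, d))     # hypothetical per-step values

    def phi(x):
        # Feature map used for linearization; elu(x) + 1 keeps entries positive
        return np.where(x > 0, x + 1.0, np.exp(x))

    def linear_attention(Q, K, V):
        # View 1: causal linearized self-attention (unnormalized, for clarity)
        T = Q.shape[0]
        out = np.zeros_like(V)
        for t in range(T):
            # weight each past value by the kernelized key-query match
            out[t] = sum(V[tau] * (phi(K[tau]) @ phi(Q[t]))
                         for tau in range(t + 1))
        return out

    def fast_weight_programmer(Q, K, V):
        # View 2: a slow net writes outer products v_t phi(k_t)^T into a
        # fast weight matrix W; the fast net W is then applied to phi(q_t)
        T, d = Q.shape
        W = np.zeros((d, d))            # fast weights, start at zero
        out = np.zeros_like(V)
        for t in range(T):
            W += np.outer(V[t], phi(K[t]))   # additive outer-product update
            out[t] = W @ phi(Q[t])           # fast net applied to the query
        return out

    # the two views produce identical outputs
    assert np.allclose(linear_attention(Q, K, V),
                       fast_weight_programmer(Q, K, V))

The equality holds because the fast weight matrix at step t is precisely the sum of outer products that linear attention distributes over past values, so applying it to phi(q_t) recovers the attention output term by term.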
