NIPS 2015


Workshop

Reasoning, Attention, Memory (RAM) Workshop

Jason E Weston · Sumit Chopra · Antoine Bordes

Room 510 AC

Motivation and Objective of the Workshop

A key component of solving AI is the use of long-term dependencies as well as short-term context during inference, i.e., the interplay of reasoning, attention and memory. The machine learning community has had great success in recent decades at solving basic prediction tasks such as text classification, image annotation and speech recognition. However, solutions to deeper reasoning tasks have remained elusive. Until recently, most machine learning models have lacked an easy way to read from and write to part of a (potentially very large) long-term memory component, and to combine this seamlessly with inference. To combine memory with reasoning, a model must learn how to access it, i.e., to perform attention over its memory. Within the last year or so, in part inspired by some earlier works [8, 9, 14, 15, 16, 18, 19], there has been notable progress in these areas, which this workshop addresses. Models developing notions of attention [12, 5, 6, 7, 20, 21] have shown positive results on a number of real-world tasks such as machine translation and image captioning. There has also been a surge in building models of computation that explore differing forms of explicit storage [1, 10, 11, 13, 17]. For example, it was recently shown how to learn a model to sort a small set of numbers [1], as well as a host of other symbolic manipulation tasks. Another promising direction is work employing a large long-term memory for reading comprehension; the capability of somewhat deeper reasoning has been shown on synthetic data [2], and promising results are starting to appear on real data [3, 4].
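
To make the notion of "attention over memory" mentioned above concrete, here is a minimal, hedged sketch of a soft attention read over a set of stored memory vectors (in the spirit of memory networks and soft attention generally; the function name, shapes and toy data are illustrative assumptions, not any specific published model):

```python
import numpy as np

def soft_memory_read(query, memory):
    """Illustrative soft attention read over a memory of stored vectors.

    query:  shape (d,)   -- vector summarizing the current reasoning state
    memory: shape (n, d) -- each row is one stored memory slot
    Returns the attention weights over slots and the weighted read vector.
    """
    scores = memory @ query                          # similarity of the query to each slot
    scores = scores - scores.max()                   # subtract max for softmax stability
    weights = np.exp(scores) / np.exp(scores).sum()  # soft attention distribution over slots
    read = weights @ memory                          # attention-weighted sum of memory slots
    return weights, read

# Toy usage: 4 memory slots of dimension 3; the query is closest to slot 2.
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.9, 0.9, 0.0],
                   [0.0, 0.0, 1.0]])
query = np.array([1.0, 1.0, 0.0])
weights, read = soft_memory_read(query, memory)
print(weights, read)
```

Because the read is a differentiable weighted sum rather than a hard lookup, the access pattern itself can be learned by gradient descent, which is what allows memory access to be combined seamlessly with inference.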

In spite of this resurgence, research into learning algorithms that combine these components, and the analysis of such algorithms, is still in its infancy. The purpose of this workshop is to bring together researchers from diverse backgrounds to exchange ideas that could address the various drawbacks of such models, leading to more interesting models in the quest for true AI. We thus plan to focus on the following issues:

* How to decide what to write and what not to write in the memory.
* How to represent knowledge to be stored in memories.
* Types of memory (arrays, stacks, or storage within the weights of the model), when they should be used, and how they can be learned.
* How to do fast retrieval of relevant knowledge from memories when the scale is huge.
* How to build hierarchical memories, e.g. employing multiscale notions of attention.
* How to build hierarchical reasoning, e.g. via composition of functions.
* How to incorporate forgetting/compression of information which is not important.
* How to properly evaluate reasoning models. Which tasks provide proper coverage while allowing unambiguous interpretation of a system's capabilities? Are artificial tasks a convenient way to do this?
* Can we draw inspiration from how animal or human memories are stored and used?

The workshop will devote most of its time to invited talks, contributed talks and a panel discussion. To move away from a mini-conference effect, we will not have any posters. To encourage interaction, a webpage will be employed for real-time updates, and will also allow people to post questions before or during the workshop; these will be asked at the end of talks or during the panel, or answered online.

Please see our external page for more information: http://www.jaseweston.com/ram
