Reasoning, Attention, Memory (RAM) Workshop
Jason E Weston · Sumit Chopra · Antoine Bordes

Sat Dec 12 05:30 AM -- 03:30 PM (PST) @ 510 ac
Event URL: http://www.jaseweston.com/ram

Motivation and Objective of the Workshop

A key component of solving AI is the use of long-term dependencies as well as short-term context during inference, i.e., the interplay of reasoning, attention and memory. The machine learning community has had great success over the last decades at solving basic prediction tasks such as text classification, image annotation and speech recognition, but solutions to deeper reasoning tasks have remained elusive. Until recently, most machine learning models lacked an easy way to read from and write to part of a (potentially very large) long-term memory component, and to combine this seamlessly with inference. To combine memory with reasoning, a model must learn how to access it, i.e., to perform attention over its memory. Within the last year or so, in part inspired by some earlier works [8, 9, 14, 15, 16, 18, 19], there has been notable progress in these areas, which this workshop addresses. Models developing notions of attention [12, 5, 6, 7, 20, 21] have shown positive results on a number of real-world tasks such as machine translation and image captioning. There has also been a surge in building models of computation which explore differing forms of explicit storage [1, 10, 11, 13, 17]. For example, it was recently shown how to learn a model to sort a small set of numbers [1], as well as a host of other symbolic manipulation tasks. Another promising direction is work employing a large long-term memory for reading comprehension; the capability of somewhat deeper reasoning has been shown on synthetic data [2], and promising results are starting to appear on real data [3, 4].
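As a concrete illustration of what "performing attention over memory" means in these models, the following minimal sketch implements a single soft-attention read over a memory matrix, in the spirit of the memory and attention models cited above [2, 12]. The function name, shapes and use of NumPy are our own illustrative assumptions, not code from any of the cited papers.

import numpy as np

def attention_read(query, memory):
    # Soft attention read over a memory matrix.
    #   query:  vector of shape (d,) encoding the current reasoning state
    #   memory: matrix of shape (n, d) holding n stored memory slots
    # Returns a weighted sum of the slots, with weights given by a
    # softmax over query-slot dot products.
    scores = memory @ query                   # (n,) relevance of each slot
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory                   # (d,) attended read vector

# Tiny usage example: four memory slots of dimension three.
rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 3))
query = rng.normal(size=3)
print(attention_read(query, memory))

A multi-step reasoning model such as a memory network [2] stacks several such reads ("hops"), feeding each read vector back into the next query, so that the access pattern itself is learned.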

In spite of this resurgence, research into learning algorithms that combine these components, and the analysis of those algorithms, is still in its infancy. The purpose of this workshop is to bring together researchers from diverse backgrounds to exchange ideas that could address the drawbacks of such models and lead to more powerful ones in the quest for true AI. We thus plan to focus on the following issues:

* How to decide what to write and what not to write in the memory.
* How to represent knowledge to be stored in memories.
* Types of memory (arrays, stacks, or storage within the weights of a model), when they should be used, and how they can be learned.
* How to do fast retrieval of relevant knowledge from memories when the scale is huge.
* How to build hierarchical memories, e.g. employing multiscale notions of attention.
* How to build hierarchical reasoning, e.g. via composition of functions.
* How to incorporate forgetting/compression of information which is not important.
* How to properly evaluate reasoning models. Which tasks provide adequate coverage while still allowing unambiguous interpretation of a system's capabilities? Are artificial tasks a convenient way to do this?
* Can we draw inspiration from how animal or human memories are stored and used?

The workshop will devote most of its time to invited talks, contributed talks and a panel discussion. To move away from a mini-conference effect we will not have any posters. To encourage interaction, a webpage will be employed for real-time updates, allowing people to post questions before or during the workshop; these will be asked at the end of talks or during the panel, or answered online.

Please see our external page for more information: http://www.jaseweston.com/ram

Author Information

Jason E Weston (Facebook AI Research)

Jason Weston received a Ph.D. (2000) from Royal Holloway, University of London, under the supervision of Vladimir Vapnik. From 2000 to 2002 he was a researcher at Biowulf Technologies, New York, applying machine learning to bioinformatics. From 2002 to 2003 he was a research scientist at the Max Planck Institute for Biological Cybernetics, Tübingen, Germany. From 2004 to June 2009 he was a research staff member at NEC Labs America, Princeton. Since July 2009 he has been a research scientist at Google, New York. His current research focuses on various aspects of statistical machine learning and its applications, particularly to text and images.

Sumit Chopra (Facebook Inc)
Antoine Bordes (Facebook AI Research)
