Motivation and Objective of the Workshop
A key component of solving AI is the use of long-term dependencies as well as short-term context during inference, i.e., the interplay of reasoning, attention and memory. The machine learning community has had great success in recent decades at solving basic prediction tasks such as text classification, image annotation and speech recognition. However, solutions to deeper reasoning tasks have remained elusive. Until recently, most machine learning models lacked an easy way to read from and write to part of a (potentially very large) long-term memory component, and to combine this seamlessly with inference. To combine memory with reasoning, a model must learn how to access it, i.e., to perform attention over its memory.
Within the last year or so, in part inspired by some earlier works [8, 9, 14, 15, 16, 18, 19], there has been notable progress in these areas, which this workshop addresses. Models developing notions of attention [12, 5, 6, 7, 20, 21] have shown positive results on a number of real-world tasks such as machine translation and image captioning. There has also been a surge in building models of computation which explore differing forms of explicit storage [1, 10, 11, 13, 17]. For example, it was recently shown how to learn a model to sort a small set of numbers, as well as a host of other symbolic manipulation tasks. Another promising direction is work employing a large long-term memory for reading comprehension; a capability for somewhat deeper reasoning has been demonstrated on synthetic data, and promising results are starting to appear on real data [3, 4].
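To make "attention over memory" concrete: the basic read operation shared by many of the models above can be sketched as a softmax-weighted sum over stored items. The snippet below is an illustrative toy, not the mechanism of any particular cited model; the dot-product scoring and the variable names are assumptions chosen for simplicity.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, memory):
    # Dot-product match score between the query and each memory slot.
    scores = [sum(q * m for q, m in zip(query, slot)) for slot in memory]
    weights = softmax(scores)  # attention distribution over slots
    dim = len(query)
    # The "read" is the attention-weighted sum of the memory slots.
    read = [sum(w * slot[d] for w, slot in zip(weights, memory))
            for d in range(dim)]
    return read, weights

# Toy memory of three 2-d items; the query is closest to slot 0,
# so attention concentrates there and the read resembles that slot.
memory = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
query = [1.0, 0.0]
read, weights = attend(query, memory)
```

Because the weighted sum is differentiable in the query and the memory contents, the access pattern itself can be learned by gradient descent, which is what makes this soft read compatible with end-to-end training.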
In spite of this resurgence, research into learning algorithms that combine these components, and the analysis of such algorithms, is still in its infancy. The purpose of this workshop is to bring together researchers from diverse backgrounds to exchange ideas that could address the drawbacks of current models and lead to more capable ones in the quest for true AI. We thus plan to focus on the following issues:
* How to decide what to write and what not to write in the memory.
* How to represent knowledge to be stored in memories.
* Types of memory (arrays, stacks, or storage within the weights of the model), when each should be used, and how memories can be learned.
* How to do fast retrieval of relevant knowledge from memories when the scale is huge.
* How to build hierarchical memories, e.g. employing multiscale notions of attention.
* How to build hierarchical reasoning, e.g. via composition of functions.
* How to incorporate forgetting/compression of information which is not important.
* How to properly evaluate reasoning models. Which tasks provide adequate coverage while allowing unambiguous interpretation of a system's capabilities? Are artificial tasks a suitable vehicle for such evaluation?
* Can we draw inspiration from how animal or human memories are stored and used?
The workshop will devote most of its time to invited talks, contributed talks and a panel discussion. To avoid a mini-conference effect, we will not have any posters. To encourage interaction, a webpage with real-time updates will be employed, allowing people to post questions before or during the workshop; these will be asked at the end of talks or during the panel, or answered online.
Please see our external page for more information: http://www.jaseweston.com/ram
Jason E Weston (Facebook AI Research)
Jason Weston received a Ph.D. (2000) from Royal Holloway, University of London, under the supervision of Vladimir Vapnik. From 2000 to 2002 he was a researcher at Biowulf Technologies, New York, applying machine learning to bioinformatics. From 2002 to 2003 he was a research scientist at the Max Planck Institute for Biological Cybernetics, Tuebingen, Germany. From 2004 to June 2009 he was a research staff member at NEC Labs America, Princeton. From July 2009 onward he has been a research scientist at Google, New York. Jason Weston's current research focuses on various aspects of statistical machine learning and its applications, particularly to text and images.
Sumit Chopra (Facebook Inc)
Antoine Bordes (Facebook AI Research)