Sat Dec 10th 08:00 AM -- 06:30 PM @ Hilton Diag. Mar, Blrm. C
Let's Discuss: Learning Methods for Dialogue
Humans conversing naturally with machines is a staple of science fiction. Building agents capable of mutually coordinating their states and actions via communication, in conjunction with human agents, would be one of the great engineering feats of human history. In addition to the tremendous economic potential of this technology, the ability to converse appears intimately related to the overall goal of AI.
Although dialogue has been an active area within the linguistics and NLP communities for decades, the recent wave of optimism in the machine learning community has inspired increased interest from researchers, companies, and foundations. The NLP community has enthusiastically embraced and innovated upon neural information processing systems, resulting in substantial relevant activity published outside of NIPS. A forum for increased interaction (dialogue!) with these communities at NIPS will accelerate creativity and progress.
We plan to focus on the following issues:
1. How to be data-driven
a. What are tractable and useful intermediate tasks on the path to truly conversant machines? How can we leverage existing benchmark tasks and competitions? What design criteria would we like to see for the next set of benchmark tasks and competitions?
b. How do we assess performance? What can and cannot be done with offline evaluation on fixed data sets? How can we facilitate development of these offline evaluation tasks in the public domain? What is the role of online evaluation as a benchmark, and how would we make it accessible to the general community? Is there a role for simulated environments, or tasks where machines communicate solely with each other?
2. How to build applications
a. What unexpected problem aspects arise in situated systems? In human-hybrid systems? In systems learning from adversarial inputs?
b. Can we divide and conquer? Do we need an irreducible end-to-end system, or can we define modules with abstractions that do not leak?
c. How do we ease the burden on the human designer of specifying or bootstrapping the system?
3. Architectural and algorithmic innovation
a. What are the associated requisite capabilities for learning architectures, and where are the deficiencies in our current architectures? How can we leverage recent advances in reasoning, attention, and memory architectures? How can we beneficially incorporate linguistic knowledge into our architectures?
b. How far can we get with current optimization techniques? To learn requisite competencies, do we need advances in discrete optimization? curriculum learning? (inverse) reinforcement learning?