Mini Symposium
Partially Observable Reinforcement Learning
Marcus Hutter · Will Uther · Pascal Poupart

Thu Dec 10 01:30 PM -- 04:30 PM (PST) @ Regency E/F
Event URL: http://www.hutter1.net/ai/porlsymp.htm

For many years, the reinforcement learning community focused primarily on sequential decision making in fully observable but unknown domains, while the planning-under-uncertainty community focused on known but partially observable domains. Since most problems are both partially observable and (at least partially) unknown, recent years have seen a surge of interest in combining the related, but often different, algorithmic machineries developed in the two communities. The time thus seems ripe for a symposium that brings these two communities together and reviews recent advances in this convergence.

A reinforcement learning agent for a partially observable environment is often broken into two parts: (1) inferring an environment model from data; and (2) solving the associated control/planning problem. There has been significant progress on both fronts in recent years. Both linear and non-linear models of various forms can now be learned from history data, and modern POMDP solvers can handle some models with millions of states. This symposium brings together five active researchers in PORL to present state-of-the-art developments.
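To make the two-part decomposition concrete, a minimal sketch of the core operation that connects an inferred model to planning is the POMDP belief update (a Bayes filter over latent states). The toy two-state model, its transition and observation probabilities, and all numbers below are illustrative assumptions, not drawn from the symposium itself:

```python
import numpy as np

# Hypothetical 2-state POMDP, fixing one action a and one observation o.
# T[s, s'] = P(s' | s, a): transition model (assumed for illustration).
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])
# O[s'] = P(o | s', a): observation likelihood (assumed for illustration).
O = np.array([0.85, 0.15])

def belief_update(b, T, O):
    """Bayes filter: b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) b(s)."""
    b_pred = b @ T              # predict: push the belief through the dynamics
    b_new = O * b_pred          # correct: reweight by observation likelihood
    return b_new / b_new.sum()  # renormalize to a probability distribution

b0 = np.array([0.5, 0.5])       # uniform prior over the two latent states
b1 = belief_update(b0, T, O)
```

A learned model (part 1) supplies `T` and `O`; a POMDP solver (part 2) then plans over beliefs like `b1` rather than over raw states, which is why scaling such solvers to large state spaces is a central theme.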

Author Information

Marcus Hutter (Australian National University)
Will Uther (NICTA, Neville Roach Laboratory)
Pascal Poupart (University of Waterloo)
