Poster
Weighted Likelihood Policy Search with Model Selection
Tsuyoshi Ueno · Yoshinobu Kawahara · Kohei Hayashi · Takashi Washio

Wed Dec 05 07:00 PM -- 12:00 AM (PST) @ Harrah’s Special Events Center 2nd Floor

Reinforcement learning (RL) methods based on direct policy search (DPS) have been actively studied as an efficient approach to complicated Markov decision processes (MDPs). Although they have brought much progress in practical applications of RL, an unsolved problem remains in DPS: model selection for the policy. In this paper, we propose a novel DPS method, weighted likelihood policy search (WLPS), in which a policy is efficiently learned through weighted likelihood estimation. WLPS naturally connects DPS to statistical inference, so various sophisticated techniques from statistics can be applied directly to DPS problems. Hence, following the idea of the information criterion, we develop a new criterion for model comparison in DPS based on the weighted log-likelihood.
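
The abstract describes the approach only at a high level. As a rough illustration of what "policy learning via weighted likelihood estimation with an information-criterion-style model comparison" can look like, the sketch below fits a Gaussian policy by return-weighted maximum likelihood on a toy 1-D regulation task and compares polynomial feature orders with a penalised weighted log-likelihood. This is not the authors' WLPS algorithm: the environment, the return-based weighting scheme, and the AIC-style penalty are all simplifying assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the WLPS method from the paper.
# Assumptions: toy linear dynamics, exponentiated-return weights,
# and an AIC-style penalty on the weighted log-likelihood.
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, features, horizon=20):
    """Run one episode of a toy 1-D regulation task; return states, actions, return."""
    s = rng.normal()
    states, actions, total_r = [], [], 0.0
    for _ in range(horizon):
        a = features(s) @ theta + 0.3 * rng.normal()   # Gaussian policy, fixed noise
        r = -(s ** 2) - 0.1 * a ** 2                    # quadratic cost as reward
        states.append(s)
        actions.append(a)
        total_r += r
        s = 0.9 * s + a + 0.1 * rng.normal()            # linear dynamics
    return np.array(states), np.array(actions), total_r

def poly_features(order):
    """Polynomial features 1, s, s^2, ..., s^order of a scalar state."""
    return lambda s: np.array([s ** k for k in range(order + 1)])

def fit_and_score(order, n_episodes=200, sigma=0.3):
    """Fit the policy mean by maximising a return-weighted Gaussian
    log-likelihood, then score the model with a penalised criterion."""
    feats = poly_features(order)
    theta = np.zeros(order + 1)                         # behaviour policy: zero mean
    X, A, R = [], [], []
    for _ in range(n_episodes):
        s, a, ret = rollout(theta, feats)
        X.append(np.stack([feats(x) for x in s]))
        A.append(a)
        R.append(np.full(len(a), ret))
    X, A, R = np.concatenate(X), np.concatenate(A), np.concatenate(R)
    w = np.exp((R - R.max()) / 10.0)                    # positive return-based weights
    sw = np.sqrt(w)
    # Weighted least squares = MLE of the return-weighted Gaussian log-likelihood.
    theta, *_ = np.linalg.lstsq(X * sw[:, None], A * sw, rcond=None)
    resid = A - X @ theta
    wll = np.sum(w * (-0.5 * np.log(2 * np.pi * sigma ** 2)
                      - 0.5 * resid ** 2 / sigma ** 2)) / w.sum()
    k = order + 1                                       # number of policy parameters
    return theta, wll - k / w.sum()                     # AIC-style penalty (assumption)

# Compare candidate policy models by their penalised weighted log-likelihood.
for order in (1, 2, 3):
    _, score = fit_and_score(order)
    print(f"feature order {order}: penalised weighted log-likelihood = {score:.3f}")
```

In this sketch, the model with the highest penalised weighted log-likelihood would be selected; the paper's contribution is a principled criterion of this kind derived for DPS itself.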

Author Information

Tsuyoshi Ueno (Japan Science and Technology)
Yoshinobu Kawahara (Osaka University / RIKEN)
Kohei Hayashi (Preferred Networks)
Takashi Washio (Osaka University)

http://www.ar.sanken.osaka-u.ac.jp/~washio/washpreg.html
