
Policy Optimization via Importance Sampling
Alberto Maria Metelli · Matteo Papini · Francesco Faccio · Marcello Restelli

Wed Dec 05 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #109

Policy optimization is an effective reinforcement learning approach to solve continuous control tasks. Recent achievements have shown that alternating online and offline optimization is a successful choice for efficient trajectory reuse. However, deciding when to stop optimizing and collect new trajectories is non-trivial, as it requires accounting for the variance of the objective function estimate. In this paper, we propose a novel, model-free, policy search algorithm, POIS, applicable in both action-based and parameter-based settings. We first derive a high-confidence bound for importance sampling estimation; then we define a surrogate objective function, which is optimized offline whenever a new batch of trajectories is collected. Finally, the algorithm is tested on a selection of continuous control tasks, with both linear and deep policies, and compared with state-of-the-art policy optimization methods.
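To make the idea concrete, the sketch below shows a variance-penalized importance-sampling surrogate of the kind the abstract describes: an IS estimate of the candidate policy's return, discounted by a confidence penalty driven by the second moment of the importance weights (which is tied to the exponentiated 2-Rényi divergence between the two trajectory distributions). This is an illustrative reconstruction under assumed names (`is_surrogate`, `delta`), not the authors' exact POIS objective.

```python
import numpy as np

def is_surrogate(returns, logp_new, logp_old, delta=0.2):
    """Variance-penalized importance-sampling surrogate (illustrative sketch).

    returns  -- per-trajectory returns collected under the behavioral policy
    logp_new -- log-probability of each trajectory under the candidate policy
    logp_old -- log-probability under the behavioral (data-collecting) policy
    delta    -- confidence level of the high-probability bound (assumed name)
    """
    n = len(returns)
    w = np.exp(logp_new - logp_old)       # per-trajectory importance weights
    is_estimate = np.mean(w * returns)    # vanilla IS estimate of the new policy's return
    # The empirical second moment of the weights estimates the exponentiated
    # 2-Renyi divergence between trajectory distributions, which controls
    # the variance of the IS estimator.
    d2_hat = np.mean(w ** 2)
    r_max = np.max(np.abs(returns))
    penalty = r_max * np.sqrt((1.0 - delta) / delta * d2_hat / n)
    return is_estimate - penalty
```

Offline optimization would ascend this surrogate in the candidate policy's parameters until the penalty outweighs the estimated gain, at which point new trajectories are collected; the penalty shrinks as the batch size grows or as the candidate stays close to the behavioral policy.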

Author Information

Alberto Maria Metelli (Politecnico di Milano)
Matteo Papini (Politecnico di Milano)

Matteo Papini was born in Sondrio, Italy, on 5th July 1993. In 2015 he obtained his Bachelor's Degree in Ingegneria Informatica (Computer Engineering) cum laude at Politecnico di Milano. In 2017 he obtained his Master's Degree in Computer Science and Engineering (Ingegneria Informatica) cum laude at Politecnico di Milano. Since November 2017 he has been a Ph.D. student at the Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB) at Politecnico di Milano. His research interests include artificial intelligence, robotics, and machine learning, with a focus on reinforcement learning.

Francesco Faccio (Politecnico di Milano - The Swiss AI Lab, IDSIA (USI & SUPSI))
Marcello Restelli (Politecnico di Milano)
