Simple random search of static linear policies is competitive for reinforcement learning
Horia Mania · Aurelia Guy · Benjamin Recht

Wed Dec 05 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #111

Model-free reinforcement learning aims to offer off-the-shelf solutions for controlling dynamical systems without requiring models of the system dynamics. We introduce a model-free random search algorithm for training static, linear policies for continuous control problems. Common evaluation methodology shows that our method matches state-of-the-art sample efficiency on the benchmark MuJoCo locomotion tasks. Nonetheless, more rigorous evaluation reveals that the assessment of performance on these benchmarks is optimistic. We evaluate the performance of our method over hundreds of random seeds and many different hyperparameter configurations for each benchmark task. This extensive evaluation is possible because of the small computational footprint of our method. Our simulations highlight high variability in performance across these benchmark tasks, indicating that commonly used estimations of sample efficiency do not adequately evaluate the performance of RL algorithms. Our results stress the need for new baselines, benchmarks, and evaluation methodology for RL algorithms.
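The core update described in the abstract — perturb a static linear policy in random directions, compare rollout rewards of the positively and negatively perturbed policies, and take a finite-difference step — admits a very short sketch. The code below is a minimal, hypothetical illustration on a toy one-dimensional system, not one of the MuJoCo benchmarks; the environment, function names, and hyperparameters are assumptions for illustration. The reward-standard-deviation scaling in the step size is one of the augmentations used in the paper's ARS variant.

```python
import numpy as np

def rollout(m, horizon=50):
    """Total reward of the linear policy u = m * s on a toy
    one-dimensional system s' = s + 0.5 * u (a hypothetical stand-in
    for a locomotion task; reward penalizes distance from the origin)."""
    s = 1.0
    total = 0.0
    for _ in range(horizon):
        u = m * s
        s = s + 0.5 * u
        total -= s * s
    return total

def basic_random_search(steps=150, alpha=0.05, nu=0.1, n_dirs=8, seed=0):
    """Sketch of the random-search update: sample perturbation directions,
    compare rewards of the positively and negatively perturbed policies,
    and take a finite-difference step, scaled by the standard deviation
    of the collected rewards."""
    rng = np.random.default_rng(seed)
    m = 0.0                              # static linear policy (a single gain here)
    for _ in range(steps):
        deltas = rng.standard_normal(n_dirs)
        rewards = []
        step = 0.0
        for d in deltas:
            r_plus = rollout(m + nu * d)
            r_minus = rollout(m - nu * d)
            rewards += [r_plus, r_minus]
            step += (r_plus - r_minus) * d
        sigma = np.std(rewards) + 1e-8   # reward-scale normalization
        m += alpha / (n_dirs * sigma) * step
    return m

m_trained = basic_random_search()
print(rollout(m_trained) > rollout(0.0))  # trained gain should beat the zero policy
```

Because each update needs only 2 * n_dirs rollouts and the policy is a small matrix (here a single scalar), many seeds and hyperparameter settings can be evaluated cheaply — which is what makes the paper's large-scale sensitivity study feasible.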

Author Information

Horia Mania (UC Berkeley)
Aurelia Guy (UC Berkeley)
Benjamin Recht (UC Berkeley)
