Robust Reinforcement Learning for Shifting Dynamics During Deployment
Samuel Stanton · Rasool Fakoor · Jonas Mueller · Andrew Gordon Wilson · Alexander Smola

While high-return policies can be learned on a wide range of systems through reinforcement learning, actual deployment of the resulting policies is often hindered by their sensitivity to future changes in the environment. Adversarial training has shown some promise in producing policies that retain better performance under environment shifts, but existing approaches only consider robustness to specific kinds of perturbations that must be specified a priori. As possible changes in future dynamics are typically unknown in practice, we instead seek a policy that is robust to a variety of realistic changes encountered only at test time. Towards this goal, we propose a new adversarial variant of soft actor-critic, which on MuJoCo continuous control tasks produces policies that are simultaneously more robust across various environment shifts, such as changes to friction and body mass.
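The abstract does not spell out the training procedure, but the core idea of adversarial training for robustness can be sketched independently of soft actor-critic: an adversary selects the dynamics parameters (here, a friction coefficient) under which the current policy performs worst, and the policy is then evaluated or trained against that worst case. The toy environment, candidate friction values, and proportional policy below are all illustrative assumptions, not the paper's method.

```python
# Minimal sketch of adversarial selection over dynamics parameters.
# Assumptions (not from the paper): a 1-D point-mass environment whose
# friction can shift at test time, and a fixed proportional policy.

class ToyEnv:
    """1-D point mass; friction pulls the state back toward zero."""
    def __init__(self, friction=0.1):
        self.friction = friction
        self.pos = 0.0

    def step(self, action):
        self.pos += action - self.friction * self.pos
        reward = -abs(self.pos - 1.0)  # goal: hold position at 1.0
        return self.pos, reward


def rollout_return(policy, env, steps=50):
    """Total reward of `policy` in `env` from a fixed initial state."""
    env.pos = 0.0
    total = 0.0
    for _ in range(steps):
        _, r = env.step(policy(env.pos))
        total += r
    return total


def adversarial_friction(policy, candidates):
    """Adversary picks the dynamics parameter that hurts the policy most."""
    return min(candidates, key=lambda f: rollout_return(policy, ToyEnv(f)))


# A simple proportional controller toward the goal position.
policy = lambda pos: 0.2 * (1.0 - pos)

worst = adversarial_friction(policy, [0.05, 0.1, 0.3, 0.6])
```

In a full adversarial RL loop this worst-case selection would alternate with policy updates, so the learned policy is pushed toward good performance under the least favorable dynamics rather than a single nominal environment.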

Author Information

Samuel Stanton (New York University)

ML Scientist at Genentech Early Research and Development (gRED). Building ML systems for scientific discovery in biotech.

Rasool Fakoor (Amazon Web Services)
Jonas Mueller (Amazon Web Services)
Andrew Gordon Wilson (New York University)
Alexander Smola (Amazon)
