Avoiding Side Effects in Complex Environments

Alex Turner, Neale Ratzlaff, Prasad Tadepalli

Spotlight presentation: Orals & Spotlights Track 14: Reinforcement Learning
2020-12-08, 20:10–20:20 (UTC-08:00)
Poster Session 3
2020-12-08, 21:00–23:00 (UTC-08:00)
Abstract: Reward function specification can be difficult. Rewarding the agent for making a widget may be easy, but penalizing the multitude of possible negative side effects is hard. In toy environments, Attainable Utility Preservation (AUP) avoided side effects by penalizing shifts in the ability to achieve randomly generated goals. We scale this approach to large, randomly generated environments based on Conway's Game of Life. By preserving optimal value for a single randomly generated reward function, AUP incurs modest overhead while leading the agent to complete the specified task and avoid many side effects. Videos and code are available at https://avoiding-side-effects.github.io/.
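The core idea in the abstract is that AUP augments the task reward with a penalty on changes in the agent's ability to achieve an auxiliary goal, measured as the shift in learned auxiliary Q-value relative to taking no action. A minimal sketch of that shaping term, assuming a learned `q_aux` function and a distinguished no-op action (all names here are illustrative, not the paper's code):

```python
# Sketch of AUP-style reward shaping: penalize shifts in attainable
# auxiliary value relative to doing nothing. Assumes q_aux(state, action)
# returns the learned Q-value of a single auxiliary reward function.

def aup_reward(r_task, q_aux, state, action, noop_action, lam=0.5):
    """Task reward minus a penalty on change in auxiliary attainable value.

    lam controls how strongly side effects (value shifts) are penalized.
    """
    penalty = abs(q_aux(state, action) - q_aux(state, noop_action))
    return r_task - lam * penalty
```

With `lam = 0`, this reduces to the plain task reward; larger values trade task performance for caution, which is the overhead the abstract describes as modest.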
