

Poster

Deep Reinforcement Learning from Human Preferences

Paul Christiano · Jan Leike · Tom Brown · Miljan Martic · Shane Legg · Dario Amodei

Pacific Ballroom #1

Keywords: [ Robotics ] [ Reinforcement Learning ] [ Decision and Control ] [ Game Playing ] [ Ranking and Preference Learning ]


Abstract:

For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. Our approach separates learning the goal from learning the behavior to achieve it. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on about 0.1% of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any which have been previously learned from human feedback.
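To make the core idea concrete, below is a minimal sketch (not the authors' released code) of the reward-learning step the abstract describes: a reward model is fit to human comparisons between pairs of trajectory segments with a Bradley-Terry-style cross-entropy loss, and the learned reward is then handed to a standard RL algorithm. All names (`RewardModel`, `preference_loss`, the tensor shapes) are illustrative assumptions, not the paper's implementation.

```python
# Sketch of preference-based reward learning, assuming PyTorch.
import torch
import torch.nn as nn


class RewardModel(nn.Module):
    """Maps an (observation, action) pair to a scalar reward estimate."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # obs: [batch, seg_len, obs_dim], act: [batch, seg_len, act_dim]
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def preference_loss(model: RewardModel, seg_a, seg_b, prefs: torch.Tensor):
    """Cross-entropy loss on human preferences between segment pairs.

    seg_a, seg_b: (obs, act) tuples for each trajectory segment.
    prefs: [batch] tensor, 1.0 if segment A was preferred, 0.0 if B was
           (0.5 can encode an expressed indifference).
    """
    # Sum predicted per-step rewards over each segment.
    sum_a = model(*seg_a).sum(dim=1)
    sum_b = model(*seg_b).sum(dim=1)
    # Bradley-Terry model: P(A preferred) = sigmoid(sum_a - sum_b);
    # fit it with binary cross-entropy against the human labels.
    logits = sum_a - sum_b
    return nn.functional.binary_cross_entropy_with_logits(logits, prefs)
```

In the paper's setup this reward model is trained asynchronously on a growing dataset of human comparisons while the policy is optimized against the model's predicted rewards, which is how feedback on only a small fraction of the agent's interactions can suffice.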
