
Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
Ben Eysenbach · Sergey Levine · Russ Salakhutdinov

Fri Dec 10 04:20 PM -- 04:35 PM (PST)

Reinforcement learning (RL) algorithms assume that users specify tasks by manually writing down a reward function. However, this process can be laborious and demands considerable technical expertise. Can we devise RL algorithms that instead enable users to specify tasks simply by providing examples of successful outcomes? In this paper, we derive a control algorithm that maximizes the future probability of these successful outcome examples. Prior work has approached similar problems with a two-stage process, first learning a reward function and then optimizing this reward function using another reinforcement learning algorithm. In contrast, our method directly learns a value function from transitions and successful outcomes, without learning this intermediate reward function. Our method therefore requires fewer hyperparameters to tune and lines of code to debug. We show that our method satisfies a new data-driven Bellman equation, where examples take the place of the typical reward function term. Experiments show that our approach outperforms prior methods that learn explicit reward functions.
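To make the "examples replace the reward term" idea concrete, here is a minimal tabular sketch, not the authors' exact algorithm: a fixed-point iteration on an example-based Bellman equation of the form V(s) = (1-γ)·success(s) + γ·V(s'), where an indicator over user-provided success examples stands in for the usual reward. The 3-state chain MDP, the `success` labels, and the deterministic transitions are all illustrative assumptions.

```python
# Toy sketch (assumed setup, not the paper's method): a 3-state deterministic
# chain 0 -> 1 -> 2 -> 2, where state 2 matches a user-provided success example.
gamma = 0.9
next_state = {0: 1, 1: 2, 2: 2}      # hypothetical deterministic transitions
success = {0: 0.0, 1: 0.0, 2: 1.0}   # 1.0 where a success example is observed

# Fixed-point iteration on the example-based Bellman equation:
# the success indicator takes the place of the reward term.
V = {s: 0.0 for s in next_state}
for _ in range(200):
    V = {s: (1 - gamma) * success[s] + gamma * V[next_state[s]] for s in V}

print(round(V[0], 3), round(V[1], 3), round(V[2], 3))
```

Here the analytic fixed point is V(2) = 1, V(1) = γ, V(0) = γ², i.e. the value function directly encodes the discounted probability of reaching a success example, with no separately learned reward function in the loop.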

Author Information

Ben Eysenbach (Google AI Resident)
Sergey Levine (UC Berkeley)
Russ Salakhutdinov (Carnegie Mellon University)
