Spotlight
Bayesian Optimistic Optimization: Optimistic Exploration for Model-based Reinforcement Learning
Chenyang Wu · Tianci Li · Zongzhang Zhang · Yang Yu

Wed Dec 07 05:00 PM -- 07:00 PM (PST)

Reinforcement learning (RL) is a general framework for modeling sequential decision-making problems, at the core of which lies the dilemma of exploitation versus exploration. An agent that fails to explore systematically will inevitably fail to learn efficiently. Optimism in the face of uncertainty (OFU) is a conventional and successful strategy for efficient exploration: an agent following the OFU principle explores actively and efficiently. However, when applied to model-based RL, OFU involves specifying a confidence set over the underlying model and solving a series of nonlinear constrained optimization problems, which can be computationally intractable. This paper proposes an algorithm, Bayesian optimistic optimization (BOO), which adopts a dynamic weighting technique to enforce the constraint rather than explicitly solving a constrained optimization problem. BOO is a general algorithm proven to be sample-efficient for models in a finite-dimensional reproducing kernel Hilbert space. We also develop techniques for effective optimization and show through simulation experiments that BOO is competitive with existing algorithms.
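To make the contrast concrete, below is a minimal sketch of the general idea of replacing a hard confidence-set constraint with a dynamically weighted penalty. It is a generic primal-dual-style penalty method on a toy quadratic problem, not BOO's actual procedure; the functions value and loss, the threshold beta, and the weight-update rule are all illustrative assumptions.

import numpy as np

# OFU-style constrained problem:
#     max_theta  value(theta)   s.t.  loss(theta) <= beta
# where {theta : loss(theta) <= beta} plays the role of a confidence set.
# A dynamic-weighting relaxation instead ascends the penalized objective
#     value(theta) - lam * max(0, loss(theta) - beta)
# and adapts the weight lam upward whenever the constraint is violated.
# (Illustrative sketch only, not BOO itself.)

def dynamic_weighting_opt(value, loss, beta, theta0,
                          lr=0.05, lam=1.0, lam_lr=0.1, steps=500):
    theta = np.asarray(theta0, dtype=float).copy()
    eps = 1e-5
    for _ in range(steps):
        def objective(t):
            return value(t) - lam * max(0.0, loss(t) - beta)
        # Central finite-difference gradient (toy scale only).
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            e = np.zeros_like(theta)
            e[i] = eps
            grad[i] = (objective(theta + e) - objective(theta - e)) / (2 * eps)
        theta = theta + lr * grad                             # ascend penalized objective
        lam = max(0.0, lam + lam_lr * (loss(theta) - beta))   # dual-style weight update
    return theta, lam

# Toy instance: the unconstrained optimum of value lies outside the set.
value = lambda t: -np.sum((t - 2.0) ** 2)   # prefers theta near (2, 2)
loss = lambda t: np.sum(t ** 2)             # "confidence set": ||theta||^2 <= beta
theta, lam = dynamic_weighting_opt(value, loss, beta=1.0, theta0=np.zeros(2))
print(theta, loss(theta))                   # theta is pushed toward the set boundary

In this toy run the iterate settles near the boundary of the feasible set in the direction favored by the objective, which is the qualitative behavior the penalty weight is meant to enforce.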

Author Information

Chenyang Wu (Nanjing University)
Tianci Li (Nanjing University)
Zongzhang Zhang (Nanjing University)

I am now an associate professor at the School of Artificial Intelligence, Nanjing University.

Yang Yu (Nanjing University)
