Bayesian Kernel Shaping for Learning Control
Jo-Anne Ting · Mrinal Kalakrishnan · Sethu Vijayakumar · Stefan Schaal

Wed Dec 10 07:30 PM -- 12:00 AM (PST)

In kernel-based regression learning, optimizing each kernel individually is useful when the data density, the curvature of regression surfaces (or decision boundaries), or the magnitude of output noise (i.e., heteroscedasticity) varies spatially. Unfortunately, this presents a complex computational problem: the danger of overfitting is high, and the individual optimization of every kernel in a learning system may be overly expensive because it introduces too many open learning parameters. Previous work has suggested gradient descent techniques or complex statistical hypothesis methods for local kernel shaping, typically requiring some amount of manual tuning of meta-parameters. In this paper, we focus on nonparametric regression and introduce a Bayesian formulation that, with the help of variational approximations, results in an EM-like algorithm for simultaneous estimation of regression and kernel parameters. The algorithm is computationally efficient (suitable for large data sets), requires no sampling, automatically rejects outliers, and has only one prior to be specified. It can be used for nonparametric regression with local polynomials or as a novel method to achieve nonstationary regression with Gaussian Processes. Our methods are particularly useful for learning control, where reliable estimation of local tangent planes is essential for adaptive controllers and reinforcement learning. We evaluate our methods on several synthetic data sets and on an actual robot that learns a task-level control law.
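To make the setting concrete, the sketch below shows plain locally weighted linear regression (a local first-order polynomial) with a Gaussian kernel of *fixed* bandwidth `h`, applied to heteroscedastic toy data. This is only the baseline problem the abstract describes, not the paper's algorithm: the paper's contribution is a Bayesian EM-like procedure that learns such bandwidths per kernel automatically, whereas here `h` is a hand-tuned assumption, and the function and variable names are illustrative.

```python
import numpy as np

def local_linear_predict(X, y, x_query, h):
    """Locally weighted linear regression at x_query with a Gaussian
    kernel of fixed bandwidth h (the quantity the paper's method
    would instead shape locally)."""
    # Gaussian kernel weight for each training point
    w = np.exp(-0.5 * ((X - x_query) / h) ** 2)
    # Design matrix for a local first-order polynomial, centered at x_query
    A = np.column_stack([np.ones_like(X), X - x_query])
    W = np.diag(w)
    # Weighted least squares: beta = (A^T W A)^{-1} A^T W y
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0]  # intercept = fitted value at x_query

# Toy heteroscedastic data: output noise magnitude grows with x
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * X) + rng.normal(0.0, 0.05 + 0.2 * X, X.shape)

pred = local_linear_predict(X, y, x_query=0.25, h=0.05)
```

With spatially varying noise and curvature as in this toy example, no single fixed `h` is ideal everywhere, which is precisely the motivation for shaping each kernel individually.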

Author Information

Jo-Anne Ting (Bosch Research)
Mrinal Kalakrishnan (University of Southern California)
Sethu Vijayakumar (University of Edinburgh)
Stefan Schaal (MPI-IS and USC)
