NIPS 2009


Workshop

Kernels for Multiple Outputs and Multi-task Learning: Frequentist and Bayesian Points of View

Mauricio A Alvarez · Lorenzo Rosasco · Neil D Lawrence

Westin: Alpine A

Accounting for dependencies between outputs has important applications in several areas. In sensor networks, for example, missing signals from temporarily failing sensors can be predicted by exploiting correlations with signals acquired from other sensors. In geostatistics, the concentration of heavy pollutant metals (for example, copper), which is expensive to measure, can be predicted from inexpensive and oversampled variables (for example, pH data).

Multi-task learning is a general learning framework in which it is assumed that learning multiple tasks simultaneously leads to better modelling and better performance than learning the same tasks individually. By exploiting correlations and dependencies among tasks, it becomes possible to handle common practical situations such as missing data, or to increase the effective amount of data when only a small amount is available per task.
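
As a concrete illustration of how such dependencies can be encoded in a kernel, below is a minimal sketch of the intrinsic coregionalization model (ICM), in which the multi-output covariance factorizes as K((x, i), (x', j)) = B[i, j] k(x, x'). The toy data, the RBF lengthscale, and the coregionalization matrix B are all assumptions made for illustration; this is not material from the workshop itself.

```python
import numpy as np

def rbf(X, X2, lengthscale=1.0):
    """Squared-exponential kernel on scalar inputs."""
    d2 = (X[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def icm_kernel(X, tasks, X2, tasks2, B, lengthscale=1.0):
    """ICM covariance: K((x, i), (x', j)) = B[i, j] * k(x, x')."""
    return B[np.ix_(tasks, tasks2)] * rbf(X, X2, lengthscale)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 40)
f = np.sin(x)                                   # shared latent signal
y0 = f + 0.05 * rng.standard_normal(40)         # output 0 ("sensor" 0)
y1 = 0.8 * f + 0.05 * rng.standard_normal(40)   # correlated output 1

# Suppose output 1 is missing on the second half of the inputs;
# train on everything that was actually observed.
X_tr = np.concatenate([x, x[:20]])
t_tr = np.concatenate([np.zeros(40, dtype=int), np.ones(20, dtype=int)])
y_tr = np.concatenate([y0, y1[:20]])

B = np.array([[1.0, 0.8],
              [0.8, 0.7]])                      # assumed task (output) covariance

K = icm_kernel(X_tr, t_tr, X_tr, t_tr, B) + 1e-4 * np.eye(len(y_tr))
alpha = np.linalg.solve(K, y_tr)

# Predict the missing half of output 1 using its correlation with output 0.
X_te = x[20:]
t_te = np.ones(20, dtype=int)
y_pred = icm_kernel(X_te, t_te, X_tr, t_tr, B) @ alpha
print(np.round(y_pred[:5], 2))
```

The same pattern extends to posterior variances and to richer constructions such as convolution processes; only the form of the multi-output kernel changes.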

In this workshop we will consider the use of kernel methods for multiple outputs and multi-task learning. The aim of the workshop is to bring together Bayesian and frequentist researchers to establish common ground and shared goals.
