

Poster

Efficient algorithms for learning kernels from multiple similarity matrices with general convex loss functions

Achintya Kundu · Vikram Mukunda Rao Tankasali · Chiranjib Bhattacharyya · Aharon Ben-Tal


Abstract:

In this paper we consider the problem of learning an n x n kernel matrix from m similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and derived several algorithms which require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle the Cartesian product of positive semidefinite (PSD) matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m^2 log n) iterations; each iteration solves an MKL problem involving m kernels and performs m eigen-decompositions of n x n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decompositions. An alternative to both EMKL and REKL is also suggested which requires only an SVM solver. Experimental results on a real-world protein dataset involving several similarity matrices illustrate the efficacy of the proposed algorithms.
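To make the per-iteration cost concrete, the sketch below shows a generic exponentiated-gradient step (mirror descent with the von Neumann entropy) over a Cartesian product of PSD blocks. It is not the authors' EMKL implementation: the step size eta, the trace budget b, and the gradient blocks grads are illustrative assumptions, and for clarity the sketch uses two eigen-decompositions per block rather than the single one per kernel counted in the abstract.

```python
import numpy as np

def md_step(kernels, grads, eta=0.1, b=1.0):
    """One exponentiated-gradient update over a list of PSD blocks:
    K_j <- exp(log K_j - eta * G_j), rescaled so tr(K_j) = b."""
    updated = []
    for K, G in zip(kernels, grads):
        G = 0.5 * (G + G.T)                       # symmetrize the gradient block
        w, V = np.linalg.eigh(K)                  # eigen-decomposition of K
        w = np.clip(w, 1e-12, None)               # stay strictly inside the PSD cone
        log_K = (V * np.log(w)) @ V.T             # matrix logarithm of K
        mw, mV = np.linalg.eigh(log_K - eta * G)  # eigen-decomposition for the exp
        K_new = (mV * np.exp(mw)) @ mV.T          # matrix exponential
        updated.append(b * K_new / np.trace(K_new))  # renormalize trace to b
    return updated
```

The trace renormalization at the end plays the role of the Bregman projection onto the trace-constrained PSD set under the von Neumann entropy, the matrix analogue of the softmax normalization in vector exponentiated gradient.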
