

Minimizing Quadratic Functions in Constant Time

Kohei Hayashi · Yuichi Yoshida

Area 5+6+7+8 #171

Keywords: [ Large Scale Learning and Big Data ] [ Kernel Methods ]

Abstract: A sampling-based optimization method for quadratic functions is proposed. Our method approximately solves the following $n$-dimensional quadratic minimization problem in constant time, which is independent of $n$: $z^* = \min_{v \in \mathbb{R}^n} \langle v, A v \rangle + n \langle v, \mathrm{diag}(d)\, v \rangle + n \langle b, v \rangle$, where $A \in \mathbb{R}^{n \times n}$ is a matrix and $d, b \in \mathbb{R}^n$ are vectors. Our theoretical analysis specifies the number of samples $k(\delta, \epsilon)$ such that the approximated solution $z$ satisfies $|z - z^*| = O(\epsilon n^2)$ with probability $1-\delta$. Numerical experiments confirm the method's accuracy and runtime in practice.
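The abstract's idea can be sketched numerically: solve the quadratic restricted to $k$ sampled coordinates and rescale the sums so the restricted objective estimates the full one. The Monte Carlo scaling below (a factor $(n/k)^2$ on the double sum over $A$ and $n^2/k$ on the single sums) is our reading of such a scheme, not a verbatim transcription of the paper's estimator; all variable names are illustrative.

```python
import numpy as np

def exact_min(A, d, b):
    """Exact minimum of <v,Av> + n<v,diag(d)v> + n<b,v>,
    assuming A is symmetric and A + n*diag(d) is positive definite."""
    n = len(b)
    H = A + n * np.diag(d)                 # quadratic-part Hessian / 2
    v = np.linalg.solve(H, -0.5 * n * b)   # stationary point: 2Hv + nb = 0
    return v @ A @ v + n * (v @ (d * v)) + n * (b @ v)

def sampled_min(A, d, b, k, rng):
    """Constant-time estimate: minimize the quadratic restricted to k
    sampled indices, with Monte Carlo rescaling of each sum (a sketch,
    not the paper's exact estimator)."""
    n = len(b)
    S = rng.choice(n, size=k, replace=False)
    M = (n / k) ** 2 * A[np.ix_(S, S)] + (n ** 2 / k) * np.diag(d[S])
    c = (n ** 2 / k) * b[S]
    u = np.linalg.solve(M, -0.5 * c)       # minimizer of u^T M u + c^T u
    return u @ M @ u + c @ u

rng = np.random.default_rng(0)
n, k = 500, 60
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)    # symmetric positive definite
d = rng.uniform(0.5, 1.5, n)   # positive, so A + n*diag(d) is PD
b = rng.standard_normal(n) / n

z_star = exact_min(A, d, b)
z_hat = sampled_min(A, d, b, k, rng)
print(z_star, z_hat)
```

Both minima are nonpositive (the zero vector already achieves objective value 0), and the gap between `z_hat` and `z_star` shrinks as `k` grows, consistent with the $O(\epsilon n^2)$ guarantee stated in the abstract.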
