

Poster

Minimizing Quadratic Functions in Constant Time

Kohei Hayashi · Yuichi Yoshida

Area 5+6+7+8 #171

Keywords: [ Kernel Methods ] [ Large Scale Learning and Big Data ]


Abstract: A sampling-based optimization method for quadratic functions is proposed. Our method approximately solves the following $n$-dimensional quadratic minimization problem in constant time, which is independent of $n$: $z^* = \min_{\mathbf{v} \in \mathbb{R}^n} \langle \mathbf{v}, A\mathbf{v} \rangle + n \langle \mathbf{v}, \mathrm{diag}(\mathbf{d})\mathbf{v} \rangle + n \langle \mathbf{b}, \mathbf{v} \rangle$, where $A \in \mathbb{R}^{n \times n}$ is a matrix and $\mathbf{d}, \mathbf{b} \in \mathbb{R}^n$ are vectors. Our theoretical analysis specifies the number of samples $k(\delta, \epsilon)$ such that the approximated solution $z$ satisfies $|z - z^*| = O(\epsilon n^2)$ with probability $1 - \delta$. The empirical performance (accuracy and runtime) is positively confirmed by numerical experiments.
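
The abstract describes estimating the minimum from a constant number of sampled indices. Below is a minimal NumPy sketch of how such a sampling-based estimator could look, assuming uniform index sampling, a positive-definite restricted quadratic form, and a $(n/k)^2$ rescaling of the subproblem's optimum; the function name `estimate_min_quadratic` and these specific choices are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def estimate_min_quadratic(A, d, b, k, seed=None):
    """Sampling-based estimate of
        z* = min_v  <v, A v> + n <v, diag(d) v> + n <b, v>,
    in the spirit of the abstract above.

    Hypothetical sketch: sample k indices uniformly without replacement,
    solve the k-dimensional restricted problem exactly, and rescale the
    optimum by (n/k)^2. Assumes the restricted quadratic form is positive
    definite, so the subproblem has a unique finite minimizer.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    S = rng.choice(n, size=k, replace=False)

    # Restrict the problem data to the sampled indices.
    A_S = A[np.ix_(S, S)]
    d_S = d[S]
    b_S = b[S]

    # Subproblem: min_v  v^T A_S v + k v^T diag(d_S) v + k b_S^T v.
    # Stationarity condition: (A_S + A_S^T + 2k diag(d_S)) v = -k b_S.
    H = A_S + A_S.T + 2 * k * np.diag(d_S)
    v = np.linalg.solve(H, -k * b_S)
    z_S = v @ A_S @ v + k * (d_S * v) @ v + k * b_S @ v

    # Rescale the k-dimensional optimum back to the n-dimensional scale.
    return (n / k) ** 2 * z_S


# Toy usage on a random positive-definite instance.
rng = np.random.default_rng(0)
n = 2000
M = rng.standard_normal((n, n)) / np.sqrt(n)
A = M @ M.T + np.eye(n)      # positive-definite matrix
d = rng.random(n) + 0.5      # positive diagonal weights
b = rng.standard_normal(n)
print(estimate_min_quadratic(A, d, b, k=60))
```

The runtime of the estimator depends only on the sample size k, not on n, once the k rows and columns have been extracted, which is the sense in which the method runs in constant time.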
