

Poster

Near Optimal Sketching of Low-Rank Tensor Regression

Xingguo Li · Jarvis Haupt · David Woodruff

Pacific Ballroom #207

Keywords: [ Sparsity and Compressed Sensing ] [ Regression ] [ Computational Complexity ]


Abstract: We study the least squares regression problem $\min_{\Theta \in \mathbb{R}^{p_1 \times \cdots \times p_D}} \|\mathcal{A}(\Theta) - b\|_2^2$, where $\Theta$ is a low-rank tensor, defined as $\Theta = \sum_{r=1}^{R} \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}$, for vectors $\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$. Here, $\circ$ denotes the outer product of vectors, and $\mathcal{A}(\Theta)$ is a linear function on $\Theta$. This problem is motivated by the fact that the number of parameters in $\Theta$ is only $R \sum_{d=1}^{D} p_d$, which is significantly smaller than the $\prod_{d=1}^{D} p_d$ number of parameters in ordinary least squares regression. We consider the above CP decomposition model of tensors $\Theta$, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on sparse random projections $\Phi \in \mathbb{R}^{m \times n}$, with $m \ll n$, to reduce the problem to a much smaller problem $\min_{\Theta} \|\Phi\mathcal{A}(\Theta) - \Phi b\|_2^2$, for which $\|\Phi\mathcal{A}(\Theta) - \Phi b\|_2^2 = (1 \pm \varepsilon)\|\mathcal{A}(\Theta) - b\|_2^2$ holds simultaneously for all $\Theta$. We obtain a significantly smaller dimension and sparsity in the randomized linear mapping $\Phi$ than is possible for ordinary least squares regression. Finally, we give a number of numerical simulations supporting our theory.
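
The snippet below is a minimal numerical sketch (not the authors' implementation) of the idea in the abstract: a CP-model tensor regression residual is compressed with a CountSketch-style sparse random projection, and the original and sketched squared residuals are compared for a single $\Theta$. All sizes ($n$, $m$, $p_1$, $p_2$, $R$), the Gaussian measurement operator, and the specific sparse projection are illustrative assumptions; the paper's guarantee is stronger, holding simultaneously for all $\Theta$ with near optimal sketch dimension and sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 2000, 200          # original and sketched number of measurements (illustrative)
p1, p2, R = 20, 20, 2     # tensor dimensions and CP rank, with D = 2 modes here

# Linear measurement operator A(Theta) = X @ vec(Theta), with Gaussian rows (an assumption).
X = rng.standard_normal((n, p1 * p2)) / np.sqrt(n)
b = rng.standard_normal(n)

# A rank-R tensor Theta = sum_r theta_1^(r) o theta_2^(r) built from outer products.
Theta = sum(np.outer(rng.standard_normal(p1), rng.standard_normal(p2))
            for _ in range(R))

# Sparse random projection Phi (CountSketch): one random +/-1 entry per column.
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
Phi = np.zeros((m, n))
Phi[rows, np.arange(n)] = signs

# Compare ||A(Theta) - b||^2 with ||Phi A(Theta) - Phi b||^2 for this one Theta.
resid = X @ Theta.ravel() - b
orig = np.linalg.norm(resid) ** 2
sketched = np.linalg.norm(Phi @ resid) ** 2
print(f"||A(Theta) - b||^2          = {orig:.4f}")
print(f"||Phi A(Theta) - Phi b||^2  = {sketched:.4f}  (ratio {sketched / orig:.3f})")
```

With these toy parameters the ratio typically lands close to 1, which is the $(1 \pm \varepsilon)$ behavior the abstract describes; each column of $\Phi$ has a single nonzero, so applying the sketch costs only one update per measurement.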
