Poster
Near Optimal Sketching of Low-Rank Tensor Regression
Xingguo Li · Jarvis Haupt · David Woodruff
Pacific Ballroom #207
Keywords: [ Sparsity and Compressed Sensing ] [ Regression ] [ Computational Complexity ]
Abstract:
We study the least squares regression problem $\min_{\Theta \in \mathbb{R}^{p_1 \times \cdots \times p_D}} \|\mathcal{A}(\Theta) - b\|_2^2$, where $\Theta$ is a low-rank tensor, defined as $\Theta = \sum_{r=1}^{R} \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}$, for vectors $\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$; here $R$ is small compared with $p_1, \dots, p_D$. The symbol $\circ$ denotes the outer product of vectors, and $\mathcal{A}(\Theta)$ is a linear function of $\Theta$. This problem is motivated by the fact that the number of parameters in $\Theta$ is only $R \cdot \sum_{d=1}^{D} p_d$, which is significantly smaller than the $\prod_{d=1}^{D} p_d$ parameters in ordinary least squares regression. We consider the above CP decomposition model of tensors $\Theta$, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on {\it sparse} random projections $\Phi \in \mathbb{R}^{m \times n}$, with $m \ll n$, to reduce the problem to a much smaller problem $\min_\Theta \|\Phi \mathcal{A}(\Theta) - \Phi b\|_2^2$, for which $\|\Phi \mathcal{A}(\Theta) - \Phi b\|_2^2 = (1 \pm \varepsilon) \|\mathcal{A}(\Theta) - b\|_2^2$ holds simultaneously for all $\Theta$. We obtain a significantly smaller dimension and sparsity in the randomized linear mapping $\Phi$ than is possible for ordinary least squares regression. Finally, we give a number of numerical simulations supporting our theory.
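As a rough illustration of the sketching idea (not the authors' algorithm), the following Python sketch builds a CountSketch-style sparse embedding $\Phi$ with one nonzero per column and compares the reduced least squares problem $\min_x \|\Phi A x - \Phi b\|_2^2$ against the full problem. The function name `countsketch`, the sizes `n`, `p`, `m`, and the plain (non-tensor) design matrix are all illustrative assumptions; the low-rank CP/Tucker parameterization of $\Theta$ from the abstract is omitted here to keep the example minimal.

```python
import numpy as np

def countsketch(A, b, m, rng):
    """Apply a CountSketch-style sparse embedding Phi (m x n) to (A, b).

    Each row i of A is hashed to a single bucket h(i) with a random sign s(i),
    so Phi has exactly one nonzero (+1 or -1) per column and Phi @ A can be
    formed in time proportional to nnz(A).
    """
    n = A.shape[0]
    h = rng.integers(0, m, size=n)          # bucket index for each row
    s = rng.choice([-1.0, 1.0], size=n)     # random sign for each row
    SA = np.zeros((m, A.shape[1]))
    Sb = np.zeros(m)
    np.add.at(SA, h, s[:, None] * A)        # SA[h[i]] += s[i] * A[i]
    np.add.at(Sb, h, s * b)
    return SA, Sb

rng = np.random.default_rng(0)
n, p, m = 20000, 50, 500                     # hypothetical sizes with n >> m >> p
A = rng.standard_normal((n, p))
x_true = rng.standard_normal(p)
b = A @ x_true + 0.01 * rng.standard_normal(n)

SA, Sb = countsketch(A, b, m, rng)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)       # original problem
x_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)    # sketched problem

# The residuals of the two solutions should agree up to a (1 +/- eps) factor.
print(np.linalg.norm(A @ x_full - b), np.linalg.norm(A @ x_sketch - b))
```

In the paper's setting, the same kind of sparse $\Phi$ is applied to $\mathcal{A}(\Theta)$ and $b$, and the point of the analysis is that the low-rank tensor structure of $\Theta$ allows a much smaller sketch dimension $m$ and sparsity than unstructured least squares would require.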