Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite this advantage, two crucial limitations arise when applying the TT decomposition to machine learning problems: the lack of statistical theory and the lack of scalable algorithms. In this paper, we address both limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method whose time complexity is as efficient as its space complexity. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method on a real higher-order tensor.
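For context on the space-efficiency claim, here is a minimal NumPy sketch (not the authors' implementation) of the TT format itself: a K-th order tensor is stored as a chain of third-order cores and can be contracted back into the full tensor. The shapes and ranks below are illustrative assumptions.

```python
import numpy as np

def tt_reconstruct(cores):
    """Contract TT cores G_k of shape (r_{k-1}, n_k, r_k), with r_0 = r_K = 1,
    back into the full tensor of shape (n_1, ..., n_K)."""
    full = cores[0]  # shape (1, n_1, r_1)
    for core in cores[1:]:
        # contract the trailing rank index of `full` with the leading rank index of `core`
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))  # drop the dummy boundary ranks r_0 = r_K = 1

# Toy example: a 4th-order tensor of shape (5, 6, 7, 8) with TT ranks (1, 3, 3, 3, 1).
rng = np.random.default_rng(0)
shape, ranks = (5, 6, 7, 8), (1, 3, 3, 3, 1)
cores = [rng.standard_normal((ranks[k], shape[k], ranks[k + 1])) for k in range(4)]
T = tt_reconstruct(cores)
print(T.shape)  # (5, 6, 7, 8)
# TT storage is sum_k r_{k-1} * n_k * r_k entries instead of prod_k n_k for the full tensor.
print(sum(c.size for c in cores), "vs", T.size)
```

The sketch only illustrates the representation; the paper's contributions (the convex relaxation, its error bound, and the randomized optimization method) operate on top of this format.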
Author Information
Masaaki Imaizumi (Institute of Statistical Mathematics / RIKEN)
Takanori Maehara (RIKEN AIP)
Kohei Hayashi (Preferred Networks)
More from the Same Authors
- 2019 Poster: Data Cleansing for Models Trained with SGD
  Satoshi Hara · Atsushi Nitanda · Takanori Maehara
- 2019 Poster: Einconv: Exploring Unexplored Tensor Network Decompositions for Convolutional Neural Networks
  Kohei Hayashi · Taiki Yamaguchi · Yohei Sugawara · Shin-ichi Maeda
- 2017 Poster: Fitting Low-Rank Tensors in Constant Time
  Kohei Hayashi · Yuichi Yoshida
- 2017 Spotlight: Fitting Low-Rank Tensors in Constant Time
  Kohei Hayashi · Yuichi Yoshida
- 2016 Poster: Minimizing Quadratic Functions in Constant Time
  Kohei Hayashi · Yuichi Yoshida
- 2013 Poster: Factorized Asymptotic Bayesian Inference for Latent Feature Models
  Kohei Hayashi · Ryohei Fujimaki
- 2012 Poster: Weighted Likelihood Policy Search with Model Selection
  Tsuyoshi Ueno · Yoshinobu Kawahara · Kohei Hayashi · Takashi Washio
- 2011 Poster: Statistical Performance of Convex Tensor Decomposition
  Ryota Tomioka · Taiji Suzuki · Kohei Hayashi · Hisashi Kashima