Poster

Distributed Flexible Nonlinear Tensor Factorization

Shandian Zhe · Kai Zhang · Pengyuan Wang · Kuang-chih Lee · Zenglin Xu · Yuan Qi · Zoubin Ghahramani

Area 5+6+7+8 #140

Keywords: [ Gaussian Processes ] [ (Other) Bayesian Inference ] [ Large Scale Learning and Big Data ] [ Bayesian Nonparametrics ] [ Matrix Factorization ]


Abstract:

Tensor factorization is a powerful tool for analysing multi-way data. Recently proposed nonlinear factorization methods, although capable of capturing complex relationships, are computationally expensive and may suffer from a severe learning bias under extreme data sparsity. To overcome these limitations, we propose a distributed, flexible nonlinear tensor factorization model that avoids the expensive computations and structural restrictions of the Kronecker product in existing tensor-variate Gaussian process (TGP) formulations, allowing an arbitrary subset of tensor entries to be selected for training. We further derive a tractable and tight variational evidence lower bound (ELBO) that enables highly decoupled, parallel computations and high-quality inference. Based on this new bound, we develop a distributed, key-value-free inference algorithm in the MapReduce framework that fully exploits the memory-cache mechanism of fast MapReduce systems such as Spark. Experiments demonstrate the advantages of our method over several state-of-the-art approaches in both predictive performance and computational efficiency.
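To make the modelling idea concrete, the sketch below shows the kind of construction the abstract describes: each selected tensor entry is mapped to a Gaussian-process input by concatenating the latent factors of its indices, so any arbitrary subset of entries can be trained on, with no Kronecker structure imposed. All names, shapes, and the RBF kernel here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def entry_inputs(U, entries):
    """Concatenate the latent factors of each selected entry
    (i1, ..., iK) into a single GP input vector; any subset of
    entries may be chosen, with no Kronecker structure assumed."""
    return np.stack([np.concatenate([U[k][idx[k]] for k in range(len(U))])
                     for idx in entries])

def rbf_kernel(X, Y, lengthscale=1.0):
    """Squared-exponential covariance between GP inputs
    (one plausible kernel choice; the model is kernel-agnostic)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

# Toy 3-mode tensor: per-mode latent factors (sizes and rank are illustrative).
rng = np.random.default_rng(0)
U = [rng.normal(size=(20, 3)),
     rng.normal(size=(30, 3)),
     rng.normal(size=(40, 3))]

# An arbitrary subset of observed entries -- the point of the construction.
entries = [(0, 5, 7), (3, 2, 1), (19, 29, 39)]
X = entry_inputs(U, entries)   # shape (3, 9): one GP input per entry
K = rbf_kernel(X, X)           # GP covariance over the selected entries
```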

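The "key-value-free" MapReduce step can be pictured as follows: each partition computes a partial statistic of the decoupled ELBO from its local entries, and the results are combined with a single associative reduce, so no key-value shuffle is needed and Spark can keep the cached training entries in memory across iterations. This is a minimal PySpark sketch under assumed names and a stand-in statistic, not the authors' implementation.

```python
from pyspark import SparkContext
import numpy as np

sc = SparkContext(appName="kv-free-sketch")

# Cache the selected training entries once; Spark keeps them in memory
# across iterations, which is what the memory-cache remark refers to.
entries = sc.parallelize(
    [(0, 5, 7, 1.2), (3, 2, 1, -0.4), (19, 29, 39, 0.7)]).cache()

def partial_stats(part):
    """Map step: each partition emits one partial statistic
    (here a stand-in count and sum of squares) -- no keys involved."""
    vals = np.array([v for (_, _, _, v) in part])
    yield np.array([float(vals.size), (vals ** 2).sum()])

# Reduce step: a single associative combine, entirely key-value free.
stats = entries.mapPartitions(partial_stats).reduce(np.add)
print("n = %d, sum of squares = %.3f" % (int(stats[0]), stats[1]))
```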