Contributed Talk 3: Paper 32: High-order Learning Model via Fractional Tensor Network Decomposition
Chao Li
Fri Dec 11 10:50 AM -- 11:00 AM (PST)
We consider high-order learning models in which the weight tensor is represented by a (symmetric) tensor network~(TN) decomposition. Although such models have been widely used for various tasks, it is challenging to determine the optimal order in complex systems (e.g., deep neural networks). To tackle this issue, we introduce the new notion of \emph{fractional tensor network~(FrTN)} decomposition, which generalizes conventional integer-order TN models by allowing the order to be an arbitrary fraction. Because fractions are dense in the real numbers, the order of the model can be treated as a learnable parameter and optimized directly by stochastic gradient descent~(SGD) and its variants. Moreover, we show that FrTN is closely connected to well-known methods such as $\ell_p$-pooling~\cite{gulcehre2014learned} and ``squeeze-and-excitation''~\cite{hu2018squeeze} operations in the deep learning literature. On the numerical side, we apply the proposed model to enhance the classic ResNet-26/50~\cite{he2016deep} and MobileNet-v2~\cite{sandler2018mobilenetv2} architectures on the CIFAR-10 and ILSVRC-12 classification tasks, and the results demonstrate the effectiveness of the learnable order parameters in FrTN.
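The abstract's connection to $\ell_p$-pooling gives a concrete picture of what a learnable order means in practice: the pooling exponent $p$ becomes just another weight updated by SGD. Below is a minimal PyTorch sketch of such a learnable-order $\ell_p$-pooling layer. It illustrates only the related pooling operation, not the paper's FrTN decomposition itself; the class name `LearnableLpPool2d` and the softplus parameterization of the order are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableLpPool2d(nn.Module):
    """l_p-pooling, y = (mean_window |x|^p)^(1/p), with a learnable order p.

    Parameterizing p = 1 + softplus(rho) keeps p > 1, so the layer
    interpolates between average pooling (p -> 1) and max pooling
    (p -> infinity) as SGD updates rho. (This parameterization is an
    illustrative choice, not the paper's construction.)
    """
    def __init__(self, kernel_size, p_init=2.0):
        super().__init__()
        self.kernel_size = kernel_size
        # Invert the softplus so training starts at p = p_init.
        rho0 = torch.log(torch.expm1(torch.tensor(p_init - 1.0)))
        self.rho = nn.Parameter(rho0)

    def forward(self, x):
        p = 1.0 + F.softplus(self.rho)                      # learnable order
        y = F.avg_pool2d(x.abs().pow(p), self.kernel_size)  # mean of |x|^p
        return y.pow(1.0 / p)

pool = LearnableLpPool2d(kernel_size=2)
x = torch.randn(8, 16, 32, 32)
print(pool(x).shape)  # torch.Size([8, 16, 16, 16])
# pool.rho receives gradients like any other weight, so the order is
# optimized jointly with the rest of the network by SGD.
```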
Author Information
Chao Li (RIKEN Center for Advanced Intelligence Project)
More from the Same Authors
- 2021: Is Rank Minimization of the Essence to Learn Tensor Network Structure?
  Chao Li · Qibin Zhao
- 2021: Discussion Panel
  Xiao-Yang Liu · Qibin Zhao · Chao Li · Guillaume Rabusseau