

Poster

Real-Valued Backpropagation is Unsuitable for Complex-Valued Neural Networks

Zhi-Hao Tan · Yi Xie · Yuan Jiang · Zhi-Hua Zhou

Hall J (level 1) #603

Keywords: [ Neural Tangent Kernel ] [ complex backpropagation ] [ complex-valued neural network ]


Abstract:

Recently, complex-valued neural networks have received increasing attention due to successful applications in various tasks, as well as their potential advantages of better theoretical properties and richer representational capacity. However, how the training dynamics of complex networks compare to those of real networks remains an open problem. In this paper, we investigate the dynamics of deep complex networks under real-valued backpropagation in the infinite-width limit via the neural tangent kernel (NTK). We first extend the Tensor Programs framework to the complex domain, showing that the dynamics of any basic complex network architecture are governed by its NTK under real-valued backpropagation. We then propose a way to compare the training dynamics of complex and real networks by studying their NTKs. Surprisingly, we prove that for most complex activation functions, the commonly used real-valued backpropagation reduces the training dynamics of complex networks to that of ordinary real networks as the width tends to infinity, thereby eliminating the distinctive characteristics of complex-valued neural networks. Finally, experiments numerically validate our theoretical findings.
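The claim can be illustrated numerically. Below is a minimal sketch (not the authors' code) that compares the empirical NTK of a wide one-hidden-layer complex-valued network, trained with real-valued backpropagation (i.e., gradients taken with respect to the real and imaginary parts of each weight), against the empirical NTK of a real network of the same width. The split activation relu(Re) + i·relu(Im), the 1/sqrt(fan-in) NTK-style scaling, and the chosen width are all illustrative assumptions, not details from the paper.

```python
import torch

torch.manual_seed(0)
width, d_in, n = 2048, 4, 3  # hidden width, input dim, number of test inputs

def complex_net_params():
    # Complex weights stored as independent real leaves (real-valued BP view).
    Wr = torch.randn(width, d_in, requires_grad=True)
    Wi = torch.randn(width, d_in, requires_grad=True)
    vr = torch.randn(width, requires_grad=True)
    vi = torch.randn(width, requires_grad=True)
    return [Wr, Wi, vr, vi]

def complex_forward(params, x):
    Wr, Wi, vr, vi = params
    # NTK-style scaling so E|W_ij|^2 = 1/d_in, E|v_j|^2 = 1/width (assumption).
    W = torch.complex(Wr, Wi) / (2 * d_in) ** 0.5
    v = torch.complex(vr, vi) / (2 * width) ** 0.5
    h = x.to(torch.cfloat) @ W.T                       # complex pre-activations
    a = torch.complex(torch.relu(h.real), torch.relu(h.imag))  # split activation (assumption)
    return (a @ v).real                                # real output: real-valued BP applies

def real_net_params():
    W = torch.randn(width, d_in, requires_grad=True)
    v = torch.randn(width, requires_grad=True)
    return [W, v]

def real_forward(params, x):
    W, v = params
    h = torch.relu(x @ (W / d_in ** 0.5).T)
    return h @ (v / width ** 0.5)

def empirical_ntk(forward, params, X):
    # NTK(i, j) = <grad_theta f(x_i), grad_theta f(x_j)>, over all real parameters.
    grads = []
    for i in range(X.shape[0]):
        g = torch.autograd.grad(forward(params, X[i:i + 1]).sum(), params)
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    G = torch.stack(grads)
    return G @ G.T

X = torch.randn(n, d_in)
K_complex = empirical_ntk(complex_forward, complex_net_params(), X)
K_real = empirical_ntk(real_forward, real_net_params(), X)
print("complex-net NTK:\n", K_complex)
print("real-net NTK:\n", K_real)
```

At finite width the two Gram matrices differ by random fluctuations; the paper's result predicts that, for most complex activations, they converge to the same deterministic kernel as the width tends to infinity, so the two printed matrices should grow closer as `width` is increased.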
