Workshop: Differential Geometry meets Deep Learning (DiffGeo4DL)

Contributed Talk 3: A Riemannian gradient flow perspective on learning deep linear neural networks

Ulrich Terstiege · Holger Rauhut · Bubacarr Bah · Michael Westdickenberg


Abstract: We study the convergence of gradient flows related to learning deep linear neural networks from data. In this case, the composition of the network layers amounts to simply multiplying the weight matrices of all layers together, resulting in an overparameterized problem. The gradient flow with respect to these factors can be reinterpreted as a Riemannian gradient flow on the manifold of rank-$r$ matrices endowed with a suitable Riemannian metric. We show that the flow always converges to a critical point of the underlying functional. Moreover, we establish that, for almost all initializations, the flow converges to a global minimum on the manifold of rank-$k$ matrices for some $k\leq r$.
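For concreteness, a minimal sketch of the setup (illustrative notation, assuming a squared loss; the paper's exact conventions may differ): with layer weight matrices $W_1,\dots,W_N$ and data matrices $X, Y$, the overparameterized objective and the gradient flow on the factors read
$$L^N(W_1,\dots,W_N) = \tfrac{1}{2}\,\big\|W_N \cdots W_1 X - Y\big\|_F^2, \qquad \dot W_j(t) = -\nabla_{W_j} L^N\big(W_1(t),\dots,W_N(t)\big), \quad j = 1,\dots,N.$$
Under suitable conditions on the initialization, the induced dynamics of the product $W(t) = W_N(t)\cdots W_1(t)$ can then be read as a Riemannian gradient flow $\dot W = -\operatorname{grad} L(W)$ for $L(W) = \tfrac{1}{2}\|WX - Y\|_F^2$ on the manifold of rank-$r$ matrices, with the Riemannian metric determined by the factorization.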
