
Towards Geometric Understanding of Low-Rank Approximation
Mahito Sugiyama · Kazu Ghalamkari

Fri Dec 11 12:00 PM -- 01:00 PM (PST) @ None

Rank reduction of matrices has been widely studied in linear algebra. However, its geometric understanding is limited and its theoretical connection to statistical models remains unexplored. We tackle this problem using information geometry and present a unified geometric view of matrix rank reduction. Our key idea is to treat each matrix as a probability distribution represented by the log-linear model on a partially ordered set (poset), which enables us to formulate rank reduction as projection onto a statistical submanifold corresponding to the set of low-rank matrices. This geometric view leads to a novel, efficient rank-1 reduction method, called Legendre rank-1 reduction, which analytically solves the mean-field approximation and minimizes the KL divergence from a given matrix.
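The mean-field (independence) solution mentioned in the abstract admits a simple closed form for a positive matrix: the KL-minimizing rank-1 approximation is the outer product of the row and column marginals divided by the total sum. A minimal NumPy sketch of this closed form (the function name `kl_rank1` is our own label, not from the paper):

```python
import numpy as np

def kl_rank1(P):
    """Closed-form rank-1 approximation of a positive matrix P that
    minimizes KL(P || Q) over rank-1 matrices Q: the outer product of
    the row and column marginals, normalized by the total sum."""
    row = P.sum(axis=1)   # row marginals
    col = P.sum(axis=0)   # column marginals
    return np.outer(row, col) / P.sum()

P = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Q = kl_rank1(P)
# Q is rank 1 and preserves the row sums, column sums, and total of P
assert np.linalg.matrix_rank(Q) == 1
assert np.allclose(Q.sum(axis=1), P.sum(axis=1))
assert np.allclose(Q.sum(axis=0), P.sum(axis=0))
```

Note that preserving the marginals is exactly what one expects of a projection onto the independence model: the rank-1 factor behaves like a product of independent marginal distributions.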

Author Information

Mahito Sugiyama (National Institute of Informatics)
Kazu Ghalamkari (National Institute of Informatics)

I am engaged in research on developing machine learning theory based on information geometry. My background from my master's course is in physics, so I am intensely interested in the interdisciplinary field of intelligent informatics and physics; when I can describe concepts in machine learning in physics terms, I am thrilled. At NeurIPS 2021, I will introduce a novel tensor decomposition method, called Legendre Tucker Rank reduction (LTR). In this paper, I point out that tensor rank-1 approximation is a mean-field approximation, a widely known physics methodology for reducing many-body problems to one-body problems. More details are on my personal page: gkazu.info
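The bio's claim that tensor rank-1 approximation coincides with mean-field approximation can be sketched the same way as in the matrix case, assuming the closed form extends to the outer product of mode marginals (the function name `rank1_mean_field` and the order-3 restriction are our own simplifications, not from the paper):

```python
import numpy as np

def rank1_mean_field(T):
    """Sketch of the mean-field rank-1 approximation of a positive
    order-3 tensor T: the outer product of the three mode marginals,
    rescaled by the squared total sum so the total is preserved."""
    S = T.sum()
    m1 = T.sum(axis=(1, 2))   # mode-1 marginal
    m2 = T.sum(axis=(0, 2))   # mode-2 marginal
    m3 = T.sum(axis=(0, 1))   # mode-3 marginal
    return np.einsum('i,j,k->ijk', m1, m2, m3) / S**2

T = np.arange(1.0, 9.0).reshape(2, 2, 2)
Q = rank1_mean_field(T)
# Each mode marginal of the approximation matches that of T
assert np.allclose(Q.sum(axis=(1, 2)), T.sum(axis=(1, 2)))
```

This mirrors the many-body-to-one-body reduction described above: the joint distribution is replaced by a product of its one-body (mode-wise) marginals.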
