

Poster

Recovery Guarantee of Non-negative Matrix Factorization via Alternating Updates

Yuanzhi Li · Yingyu Liang · Andrej Risteski

Area 5+6+7+8 #79

Keywords: [ Matrix Factorization ] [ Learning Theory ]


Abstract:

Non-negative matrix factorization is a popular tool for decomposing data into feature and weight matrices under non-negativity constraints. It enjoys practical success but is poorly understood theoretically. This paper proposes an algorithm that alternates between decoding the weights and updating the features, and shows that, assuming a generative model of the data, it provably recovers the ground truth under fairly mild conditions. In particular, its only essential requirement on the features is linear independence. Furthermore, the algorithm uses a ReLU to exploit the non-negativity when decoding the weights, and can therefore tolerate adversarial noise potentially as large as the signal, and unbiased noise much larger than the signal. The analysis relies on a carefully designed coupling between two potential functions, which we believe is of independent interest.
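The abstract describes an alternate decode-and-update scheme: given the current features, decode the non-negative weights using a ReLU, then update the features. The sketch below illustrates that general pattern in NumPy; the decoding rule (pseudoinverse followed by a ReLU), the gradient-style feature update, the step size `eta`, and the random initialization are illustrative placeholders and not the paper's actual algorithm, step sizes, or recovery conditions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def alternating_nmf(Y, r, iters=200, eta=0.1, seed=0):
    """Sketch of an alternate decode-and-update scheme for Y ~ A @ X.

    Y: (d, n) data matrix, A: (d, r) features, X: (r, n) non-negative weights.
    """
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    A = rng.standard_normal((d, r))  # hypothetical initialization
    for _ in range(iters):
        # Decode step: fit the weights given the current features, then
        # apply a ReLU to exploit non-negativity (placeholder for the
        # paper's decoding rule).
        X = relu(np.linalg.pinv(A) @ Y)
        # Update step: move the features in a residual-reducing direction
        # (a plain gradient step, not the paper's update).
        grad = (A @ X - Y) @ X.T / n
        A = A - eta * grad
    return A, X

if __name__ == "__main__":
    # Synthetic check: data generated from non-negative ground truth.
    rng = np.random.default_rng(1)
    A_true = np.abs(rng.standard_normal((20, 5)))
    X_true = np.abs(rng.standard_normal((5, 200)))
    Y = A_true @ X_true
    A_hat, X_hat = alternating_nmf(Y, r=5)
    err = np.linalg.norm(A_hat @ X_hat - Y) / np.linalg.norm(Y)
    print("relative reconstruction error:", err)
```

The usage example only checks reconstruction error on noiseless synthetic data; the paper's guarantees concern recovery of the ground-truth factors under its generative model and noise conditions, which this sketch does not reproduce.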
