

Poster in Workshop: Causal Representation Learning

Independent Mechanism Analysis and the Manifold Hypothesis: Identifiability and Genericity

Shubhangi Ghosh · Luigi Gresele · Julius von Kügelgen · Michel Besserve · Bernhard Schölkopf

Keywords: [ independent component analysis ] [ genericity ] [ manifold hypothesis ] [ representation learning ] [ independent mechanism analysis ] [ concentration inequalities ] [ high-dimensional data ] [ identifiability ]


Abstract:

Independent Mechanism Analysis (IMA) seeks to address non-identifiability in nonlinear ICA by assuming that the Jacobian of the mixing function has orthogonal columns. Previous research focused on the case with equal numbers of latent components and observed mixtures, as is typical in ICA. In this work, we extend IMA to model mixtures residing on a manifold embedded in a space of higher dimension than the latent space, in line with the manifold hypothesis in representation learning. We show that IMA circumvents several non-identifiability issues arising in this setting, suggesting that it can be beneficial even when the manifold hypothesis holds. Moreover, we prove that the IMA principle is approximately satisfied when the directions along which the latent components influence the observations are chosen independently, with probability increasing with the dimensionality of the observed space. This provides a new and rigorous statistical interpretation of IMA.
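A minimal numerical sketch of the two claims in the abstract, assuming a standard column-orthogonality deviation measure (zero exactly when the Jacobian columns are orthogonal, via Hadamard's inequality) as a stand-in for the IMA principle; the specific dimensions, the random-matrix setup, and the function names below are illustrative choices, not taken from the paper.

```python
# Illustrative sketch (assumptions: the deviation measure, dimensions, and toy setup below).
import numpy as np

rng = np.random.default_rng(0)

def column_orthogonality_deviation(J):
    """Deviation of J's columns from mutual orthogonality (>= 0; = 0 iff orthogonal).

    By Hadamard's inequality, sum_i log ||J_i|| >= 0.5 * log det(J^T J),
    with equality exactly when the columns J_i are mutually orthogonal.
    The measure is invariant to rescaling individual columns.
    """
    col_norms = np.linalg.norm(J, axis=0)
    _, logdet = np.linalg.slogdet(J.T @ J)
    return np.sum(np.log(col_norms)) - 0.5 * logdet

# Claim 1 (orthogonal-columns Jacobian, manifold setting n < d): deviation is zero.
n, d = 3, 10
Q, _ = np.linalg.qr(rng.standard_normal((d, n)))   # d x n matrix with orthonormal columns
print("orthonormal columns:", column_orthogonality_deviation(Q))   # ~ 0

# Claim 2 (genericity): columns drawn independently at random become
# approximately orthogonal as the observed dimension d grows, so the
# deviation concentrates near zero with increasing d.
for d in (10, 100, 1000, 10000):
    devs = [column_orthogonality_deviation(rng.standard_normal((d, n)))
            for _ in range(200)]
    print(f"d={d:6d}  mean deviation={np.mean(devs):.4f}")
```

The second loop mirrors the genericity statement in spirit only: independently drawn directions in a high-dimensional observed space are nearly orthogonal with high probability, a concentration-of-measure effect.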
