

Poster

Fast structure learning with modular regularization

Greg Ver Steeg · Hrayr Harutyunyan · Daniel Moyer · Aram Galstyan

East Exhibition Hall B, C #16

Keywords: [ Model Selection and Structure Learning ] [ Algorithms ] [ Information Theory ] [ Neuroscience and Cognitive Science -> Brain Segmentation ] [ Probabilistic Methods -> Latent Variable Models ] [ Theory ]


Abstract:

Estimating graphical model structure from high-dimensional and undersampled data is a fundamental problem in many scientific fields. Existing approaches, such as GLASSO, latent variable GLASSO, and latent tree models, suffer from high computational complexity and may impose unrealistic sparsity priors in some cases. We introduce a novel method that leverages a newly discovered connection between information-theoretic measures and structured latent factor models to derive an optimization objective which encourages modular structures, where each observed variable has a single latent parent. The proposed method has linear stepwise computational complexity in the number of observed variables. Our experiments on synthetic data demonstrate that our approach is the only method whose recovery of modular structure improves as the dimensionality increases. We also use our approach to estimate covariance structure for a number of real-world datasets and show that it consistently outperforms state-of-the-art estimators at a fraction of the computational cost. Finally, we apply the proposed method to high-resolution fMRI data (with more than 10^5 voxels) and show that it is capable of extracting meaningful patterns.
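To make the notion of "modular structure" concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm): it generates synthetic data in which each observed variable has exactly one latent parent, then checks that a simple correlation-based assignment recovers each variable's module. All variable names and the recovery heuristic are illustrative assumptions.

```python
import numpy as np

# Illustrative toy example of modular latent structure (NOT the paper's method):
# each observed variable is generated from a single latent parent plus noise.
rng = np.random.default_rng(0)
n_samples, n_latent, vars_per_module = 500, 3, 4
n_obs = n_latent * vars_per_module

Z = rng.normal(size=(n_samples, n_latent))                 # latent factors
parent = np.repeat(np.arange(n_latent), vars_per_module)   # one latent parent per variable
X = Z[:, parent] + 0.3 * rng.normal(size=(n_samples, n_obs))  # observed variables

# Assign each observed variable to the latent factor it correlates with most;
# in this well-separated toy setting that recovers the true modules.
corr = np.abs(np.corrcoef(X.T, Z.T)[:n_obs, n_obs:])  # (n_obs, n_latent) correlations
recovered = corr.argmax(axis=1)
print("fraction of parents recovered:", (recovered == parent).mean())
```

In the paper's setting the latent factors are of course not observed; the point of the sketch is only the generative assumption (a single latent parent per observed variable) that the proposed regularizer encourages.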
