

Poster

Cross-Linked Unified Embedding for cross-modality representation learning

Xinming Tu · Zhi-Jie Cao · Chen-Rui Xia · Sara Mostafavi · Ge Gao

Hall J (level 1) #215

Keywords: [ Representation Learning ] [ Single-cell Genomics ] [ Computational Biology and Bioinformatics ] [ Deep Autoencoders ] [ Semi-Supervised Learning ] [ Multimodal Learning ]


Abstract:

Multi-modal learning is essential for understanding information in the real world. Jointly learning from multi-modal data enables global integration of both shared and modality-specific information, but current strategies often fail when observations from certain modalities are incomplete or missing for part of the subjects. To learn comprehensive representations from such modality-incomplete data, we present a semi-supervised neural network model called CLUE (Cross-Linked Unified Embedding). Extending multi-modal VAEs, CLUE introduces cross-encoders to construct latent representations from modality-incomplete observations. Representation learning for modality-incomplete observations is common in genomics: for example, human cells are tightly regulated across multiple related but distinct modalities such as DNA, RNA, and protein, which jointly define a cell's function. We benchmark CLUE on multi-modal single-cell measurements, where it achieves superior performance in all assessed categories of the NeurIPS 2021 Multimodal Single-cell Data Integration Competition. While we focus on the analysis of single-cell genomic datasets, the proposed cross-linked embedding strategy could be readily applied to other cross-modality representation learning problems.
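To make the cross-encoder idea concrete, the following is a minimal, illustrative PyTorch sketch, not the authors' implementation, assuming two hypothetical modalities ("rna" and "protein") and standard Gaussian VAE posteriors. Each modality has a self-encoder into its own latent component and a cross-encoder into the other modality's latent component, so a unified embedding can still be formed when one modality is missing. All module names, dimensions, and architectural choices here are assumptions for illustration only.

```python
# Illustrative sketch of a cross-linked multi-modal VAE encoder/decoder.
# This is a conceptual example, not CLUE's released code.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps one modality's input to the mean/log-variance of a latent Gaussian."""
    def __init__(self, in_dim: int, latent_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)


class CrossLinkedVAE(nn.Module):
    """Each modality has a self-encoder plus a cross-encoder targeting the
    other modality's latent component; decoders reconstruct both modalities
    from the unified embedding (the concatenated latent components)."""
    def __init__(self, dim_rna: int, dim_prot: int, latent_dim: int = 32):
        super().__init__()
        # Self-encoders: a modality into its own latent component.
        self.enc_rna = Encoder(dim_rna, latent_dim)
        self.enc_prot = Encoder(dim_prot, latent_dim)
        # Cross-encoders: a modality into the *other* modality's latent component.
        self.enc_rna_to_prot = Encoder(dim_rna, latent_dim)
        self.enc_prot_to_rna = Encoder(dim_prot, latent_dim)
        # Decoders from the unified embedding back to each modality.
        self.dec_rna = nn.Linear(2 * latent_dim, dim_rna)
        self.dec_prot = nn.Linear(2 * latent_dim, dim_prot)

    @staticmethod
    def reparameterize(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, rna=None, prot=None):
        assert rna is not None or prot is not None, "at least one modality required"
        # Use the self-encoder when a modality is observed; otherwise fall back
        # to the cross-encoder from the modality that is available.
        if rna is not None:
            z_rna = self.reparameterize(*self.enc_rna(rna))
        else:
            z_rna = self.reparameterize(*self.enc_prot_to_rna(prot))
        if prot is not None:
            z_prot = self.reparameterize(*self.enc_prot(prot))
        else:
            z_prot = self.reparameterize(*self.enc_rna_to_prot(rna))
        z = torch.cat([z_rna, z_prot], dim=-1)  # unified embedding
        return self.dec_rna(z), self.dec_prot(z), z
```

In a full model of this kind, the self- and cross-encoded posteriors for the same latent component would typically also be aligned during training on cells where both modalities are observed, which is what allows the cross-encoders to stand in for missing modalities at inference time.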
