Capturing similarity among cells is central to many tasks in single-cell transcriptomics, such as identifying cell types and cell states. This problem can be formulated in the paradigm of metric learning, which aims to learn data embeddings (feature vectors) such that the distance between feature vectors of cells of the same cell type is reduced, while the distance between feature vectors of cells of different cell types is increased. Deep metric learning, in turn, uses neural networks to automatically learn discriminative features from cells and then compute the metric. (Deep) metric learning approaches have been successfully applied to computational biology tasks such as similar-cell identification and the synthesis of heterogeneous single-cell modalities. We identify two computational challenges in applying (deep) metric learning: precise distance measurement between cells, and scalability to large amounts of data. We then propose our solutions: optimal transport and coreset optimization. Empirical studies on image retrieval and clustering tasks show the promise of the proposed approaches. We propose to further explore the applicability of our methods to cell representation learning.
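To make the optimal-transport idea concrete, the following is a minimal NumPy sketch of an entropy-regularized optimal transport (Sinkhorn) distance between two expression profiles. It is an illustration only, not the implementation from this work; the toy 4-gene profiles, the squared-difference ground cost, and the parameter values (`eps`, `n_iters`) are all hypothetical choices.

```python
import numpy as np

def sinkhorn_distance(a, b, C, eps=0.1, n_iters=500):
    """Entropy-regularized optimal transport cost between two
    probability vectors a, b under ground cost matrix C."""
    K = np.exp(-C / eps)             # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):         # alternating Sinkhorn scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan with marginals ~ a, b
    return float(np.sum(P * C))      # transport cost of the plan

# Toy example: two hypothetical 4-gene expression profiles,
# normalized to probability vectors.
x = np.array([5.0, 1.0, 1.0, 1.0]); a = x / x.sum()
y = np.array([1.0, 1.0, 1.0, 5.0]); b = y / y.sum()
genes = np.arange(4, dtype=float)
C = (genes[:, None] - genes[None, :]) ** 2  # squared-difference ground cost
d = sinkhorn_distance(a, b, C)
```

Unlike a plain Euclidean distance between expression vectors, the transport cost accounts for the geometry encoded in `C`, which is why optimal transport can serve as a more precise cell-to-cell distance.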
Author Information
Jason Xiaotian Dou (University of Pittsburgh)
Minxue Jia (University of Pittsburgh)
Nika Zaslavsky (Carnegie Mellon University)
Haiyi Mao (University of Pittsburgh)
Runxue Bao (University of Pittsburgh)
Ni Ke
Paul Pu Liang (Carnegie Mellon University)
Zhi-Hong Mao (University of Pittsburgh)