
Improving Textual Network Learning with Variational Homophilic Embeddings
Wenlin Wang · Chenyang Tao · Zhe Gan · Guoyin Wang · Liqun Chen · Xinyuan Zhang · Ruiyi Zhang · Qian Yang · Ricardo Henao · Lawrence Carin

Wed Dec 11 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #106

The performance of many network learning applications hinges crucially on the success of network embedding algorithms, which aim to encode rich network information into low-dimensional vertex-based vector representations. This paper considers a novel variational formulation of network embeddings, with a special focus on textual networks. Unlike most existing methods, which optimize a discriminative objective, we introduce Variational Homophilic Embedding (VHE), a fully generative model that learns network embeddings by modeling the semantic (textual) information with a variational autoencoder, while accounting for the structural (topology) information through a novel homophilic prior design. Homophilic vertex embeddings encourage similar embedding vectors for related (connected) vertices. VHE offers better generalization for downstream tasks, robustness to incomplete observations, and the ability to generalize to unseen vertices. Extensive experiments on real-world networks, across multiple tasks, demonstrate that the proposed method consistently outperforms competing state-of-the-art approaches.
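The homophilic-prior idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the choice of a Gaussian KL term, the unit-variance prior, and the function name are all assumptions made only to show how a prior centered on a neighbor's embedding penalizes connected vertices whose latent embeddings drift apart.

```python
import numpy as np

def kl_homophilic(mu_q, logvar_q, mu_prior):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_prior, I) ).

    Illustrative only: using a connected neighbor's embedding as the
    prior mean mu_prior penalizes embeddings that move away from their
    neighbors, which is the homophily effect the prior is meant to induce.
    """
    var_q = np.exp(logvar_q)
    return 0.5 * np.sum(var_q + (mu_q - mu_prior) ** 2 - 1.0 - logvar_q)

# Toy check: if two linked vertices share the same embedding, the
# homophilic penalty vanishes; distant embeddings pay a quadratic cost.
z_u = np.zeros(4)
near = kl_homophilic(z_u, np.zeros(4), np.zeros(4))        # identical -> 0
far = kl_homophilic(z_u + 3.0, np.zeros(4), np.zeros(4))   # distant -> 18
```

In a full VAE this KL term would be added to the text reconstruction loss for each observed edge, so the encoder trades off reconstructing a vertex's text against staying close to its neighbors in latent space.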

Author Information

Wenlin Wang (Duke University)
Chenyang Tao (Duke University)
Zhe Gan (Microsoft)
Guoyin Wang (Duke University)
Liqun Chen (Duke University)
Xinyuan Zhang (Duke University)
Ruiyi Zhang (Duke University)
Qian Yang (Duke University)
Ricardo Henao (Duke University)
Lawrence Carin (Duke University)
