Poster
Capacity and Bias of Learned Geometric Embeddings for Directed Graphs
Michael Boratko · Dongxu Zhang · Nicholas Monath · Luke Vilnis · Kenneth L Clarkson · Andrew McCallum

Fri Dec 10 08:30 AM -- 10:00 AM (PST)

A wide variety of machine learning tasks, such as knowledge base completion, ontology alignment, and multi-label classification, can benefit from incorporating differentiable representations of graphs or taxonomies into learning. While vectors in Euclidean space can theoretically represent any graph, much recent work shows that alternatives such as complex, hyperbolic, order, or box embeddings have geometric properties better suited to modeling real-world graphs. Experimentally, however, these gains are seen only in lower dimensions, with the performance benefits diminishing as dimensionality grows. In this work, we introduce a novel variant of box embeddings that uses a learned smoothing parameter to achieve better representational capacity than vector models in low dimensions, while also avoiding the performance saturation common to other geometric models in high dimensions. Further, we present theoretical results proving that box embeddings can represent any DAG. We perform rigorous empirical evaluations of vector, hyperbolic, and region-based geometric representations on several families of synthetic and real-world directed graphs. Analysis of these results exposes correlations between families of graphs, graph characteristics, model size, and embedding geometry, providing useful insights into the inductive biases of various differentiable graph representations.
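To make the region-based idea concrete, below is a minimal sketch of smoothed box containment in plain Python. It assumes the common softplus-smoothed box formulation, in which each node is an axis-aligned box (a min and max per dimension) and an edge u → v is scored by how much of box(v) lies inside box(u); the function names, the fixed temperature, and the softplus smoothing are illustrative assumptions, not the paper's exact learned parameterization.

```python
import math

def soft_len(lo, hi, temp):
    """Smoothed side length: temp * log(1 + exp((hi - lo) / temp)).
    Approaches max(hi - lo, 0) as temp -> 0, but stays differentiable."""
    x = (hi - lo) / temp
    if x > 20:  # numerically stable branch: softplus(x) ~ x for large x
        return hi - lo
    return temp * math.log1p(math.exp(x))

def soft_volume(mins, maxs, temp):
    """Product of smoothed side lengths over all dimensions."""
    vol = 1.0
    for lo, hi in zip(mins, maxs):
        vol *= soft_len(lo, hi, temp)
    return vol

def containment_score(child, parent, temp=0.01):
    """Approximate P(child inside parent) as
    vol(child ∩ parent) / vol(child)."""
    (c_min, c_max), (p_min, p_max) = child, parent
    # Intersection box: elementwise max of mins, min of maxes.
    i_min = [max(a, b) for a, b in zip(c_min, p_min)]
    i_max = [min(a, b) for a, b in zip(c_max, p_max)]
    return soft_volume(i_min, i_max, temp) / soft_volume(c_min, c_max, temp)

# A nested pair scores near 1, a disjoint pair near 0:
child  = ([0.2, 0.2], [0.8, 0.8])
parent = ([0.0, 0.0], [1.0, 1.0])
far    = ([2.0, 2.0], [3.0, 3.0])
print(containment_score(child, parent))  # ≈ 1.0
print(containment_score(far, parent))    # ≈ 0.0
```

In a trained model the box coordinates would be learned parameters and, as the abstract describes, the smoothing temperature itself can be learned rather than fixed; the smoothed volume keeps gradients nonzero even for disjoint boxes, which is what makes the representation trainable.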

Author Information

Michael Boratko (University of Massachusetts Amherst)
Dongxu Zhang (University of Massachusetts Amherst)
Nicholas Monath (University of Massachusetts Amherst)
Luke Vilnis (University of Massachusetts Amherst)
Kenneth L Clarkson (IBM Almaden)
Andrew McCallum (University of Massachusetts Amherst)
