Prof. Dr. Emmanuel Müller


Embedding a web-scale information network into a low-dimensional vector space facilitates tasks such as link prediction, classification, and visualization. Past research has addressed the problem of extracting such embeddings by adapting methods devised for word embeddings to graphs, without defining a clearly comprehensible graph-related objective. Yet, as we show, the objectives used in past works implicitly utilize similarity measures among graph nodes. We carry the similarity orientation of previous works to its logical conclusion: we propose VERtex Similarity Embeddings (VERSE), a simple, versatile, and memory-efficient method that derives graph embeddings explicitly calibrated to preserve the distributions of a selected vertex-to-vertex similarity measure.

VERSE learns such embeddings by training a single-layer neural network. While its default, scalable version does so by sampling similarity information, we also develop a variant that uses the full similarity information per vertex. Our experimental study on standard benchmarks and real-world datasets demonstrates that VERSE, instantiated with diverse similarity measures, outperforms state-of-the-art methods in terms of precision and recall in major data mining tasks and supersedes them in time and space efficiency, while the scalable sampling-based variant achieves equally good results as the non-scalable full variant.
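To make the objective concrete, here is a minimal sketch of the full (non-sampled) variant in NumPy: embeddings W are fit so that the per-node softmax over dot products matches a chosen similarity distribution. As an illustrative stand-in for the similarity measure, this sketch uses the row-normalized adjacency (one-step transition probabilities); the function name, hyperparameters, and the plain gradient-descent loop are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def verse_full(adj, dim=2, lr=0.02, epochs=1000, seed=0):
    """Sketch of a full-information VERSE-style fit: minimize, per node,
    the KL divergence between a target similarity distribution and the
    softmax over embedding dot products.
    Target here: row-normalized adjacency (illustrative choice; PPR or
    SimRank could be plugged in instead)."""
    n = adj.shape[0]
    sim = adj / adj.sum(axis=1, keepdims=True)   # target distribution per node
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n, dim))     # one embedding per vertex
    for _ in range(epochs):
        logits = W @ W.T
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)        # model distribution (row softmax)
        r = p - sim                               # d(loss)/d(logits), row-wise
        W -= lr * (r @ W + r.T @ W)               # chain rule through W @ W.T
    return W
```

After training, nodes with similar distributions end up close in the embedding space, which is what downstream link prediction or classification exploits.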

In this paper:

  • We provide a new, useful abstraction for graph embeddings: node similarities.
  • We create VERSE to explicitly work with node similarities.
  • We show how previous works implicitly utilize similarities.
  • We show that negative sampling cannot preserve similarities; instead, we develop an approximation based on Noise Contrastive Estimation.
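The NCE-flavored update can be sketched as follows: for a node v, draw one positive context from the similarity distribution and a few noise contexts from a uniform distribution, then take logistic-loss gradient steps. This is a word2vec-style sketch under assumed hyperparameters (function name, learning rate, noise distribution), not the paper's exact estimator.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def verse_nce_step(W, v, sim_row, rng, n_neg=3, lr=0.02):
    """One sampled update for node v (sketch). sim_row is the similarity
    distribution sim(v, .) over all nodes; noise contexts are uniform.
    Self-samples are not filtered here for brevity."""
    n = W.shape[0]
    pos = rng.choice(n, p=sim_row)          # positive context ~ sim(v, .)
    wv, wp = W[v].copy(), W[pos].copy()
    g = sigmoid(wv @ wp) - 1.0              # in (-1, 0): pull v and pos together
    W[v] -= lr * g * wp
    W[pos] -= lr * g * wv
    for _ in range(n_neg):
        neg = rng.integers(n)               # noise context ~ uniform over nodes
        wv, wn = W[v].copy(), W[neg].copy()
        g = sigmoid(wv @ wn)                # in (0, 1): push v and neg apart
        W[v] -= lr * g * wn
        W[neg] -= lr * g * wv
    return W
```

Because each step touches only a handful of vectors, the per-update cost is independent of graph size, which is what makes the sampled variant scale.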
Figure: Three node properties highlighted on the same graph. Can a single model capture these properties?

VERSE: Versatile Graph Embeddings from Similarity Measures

Anton Tsitsulin, Davide Mottin, Panagiotis Karras, Emmanuel Müller

In Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France.