Knowledge graph embeddings, which generate vector space representations of knowledge graph triples, have gained considerable popularity in recent years. Several embedding models have been proposed that achieve state-of-the-art performance on the task of triple completion in knowledge graphs. Relying on the presumed semantic capabilities of the learned embeddings, these models have been leveraged for various other tasks such as entity typing, rule mining, and conceptual clustering. However, previous work has not critically analyzed the utility and limitations of these embeddings for the semantic representation of the underlying entities and relations. In this work, we perform a systematic evaluation of popular knowledge graph embedding models to better understand their semantic capabilities compared to a non-embedding-based approach.
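To illustrate the general idea behind such embedding models, here is a minimal sketch of a translational scoring function in the style of TransE, one of the widely used knowledge graph embedding models. The embedding vectors and the function name are illustrative assumptions, not code from this repository; in practice the vectors would be learned from the training triples.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility score for a triple (h, r, t):
    the negative L2 distance between h + r and t.
    A score closer to 0 indicates a more plausible triple."""
    return -np.linalg.norm(h + r - t)

# Toy 3-dimensional embeddings (illustrative values only).
head = np.array([1.0, 0.0, 0.0])
rel  = np.array([0.0, 1.0, 0.0])
tail = np.array([1.0, 1.0, 0.0])

# Here head + rel equals tail exactly, so the score is 0.0 (maximal).
print(transe_score(head, rel, tail))
```

For triple completion, such a score is computed for candidate tails (or heads), and the highest-scoring candidates are predicted as completions.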
Further details can be found in the associated publication at the ESWC 2021 conference.