Embedding-based Knowledge Graph (KG) completion has gained much attention over the past few years. Most current algorithms consider a KG as a directed labeled multigraph and lack the ability to capture the semantics underlying its schematic information. In a separate development, a vast amount of information has been captured within Large Language Models (LLMs), which have revolutionized the field of Artificial Intelligence. KGs could benefit from these LLMs, and vice versa. This vision paper discusses the existing algorithms for KG completion based on variations in how KG embeddings are generated. It starts by discussing various KG completion algorithms, such as transductive and inductive link prediction and entity type prediction algorithms. It then moves on to algorithms utilizing type information within KGs, to LLM-based approaches, and finally to algorithms capturing the semantics represented in different description logic axioms. We conclude the paper with a critical reflection on the current state of work in the community and give recommendations for future directions.