Embedding methods transform a knowledge graph (KG) into a continuous, low-dimensional space, facilitating inference and completion tasks. Existing methods fall mainly into two types: translational distance models and semantic matching models. A key limitation of translational distance models is their inability to effectively differentiate between head and tail entities in graphs. To address this problem, a novel location-sensitive embedding (LSE) method is developed. LSE modifies the head entity using relation-specific mappings, conceptualizing relations as linear transformations rather than mere translations. The theoretical foundations of LSE, including its representational capability and its connections to existing models, are examined in depth. A more streamlined variant, LSEd, which employs a diagonal matrix for transformations to improve practical efficiency, is also proposed. Experiments on four large-scale KG datasets for link prediction show that LSEd outperforms or is competitive with state-of-the-art methods.
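The contrast between translation-based scoring and LSE's relation-specific mapping of the head entity can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the exact scoring form (L2 distance with an additive translation term) and the function names are assumptions made for clarity.

```python
import numpy as np

def transe_score(h, r, t):
    # Translational baseline (TransE-style): relation as a pure
    # translation, score = -||h + r - t||. Head and tail play
    # symmetric roles up to a sign, which is the limitation
    # the abstract points to.
    return -np.linalg.norm(h + r - t)

def lse_score(h, M_r, r, t):
    # LSE (sketch): the head entity is first mapped by a
    # relation-specific linear transformation M_r, so the model
    # is sensitive to an entity's position (head vs. tail).
    return -np.linalg.norm(M_r @ h + r - t)

def lsed_score(h, m_r, r, t):
    # LSEd (sketch): M_r restricted to a diagonal matrix, i.e.
    # an element-wise scaling vector m_r, reducing the per-relation
    # parameter count from d*d to d.
    return -np.linalg.norm(m_r * h + r - t)
```

With the identity mapping (a full identity matrix for LSE, an all-ones vector for LSEd), both sketches reduce to the translational baseline, which illustrates how LSE generalizes translation-based models.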