Entity Alignment (EA) seeks to identify and match corresponding entities across different Knowledge Graphs (KGs), playing a crucial role in knowledge fusion and integration. Embedding-based EA has recently gained considerable attention, giving rise to many innovative approaches. Early approaches concentrated on learning entity embeddings from the structural features of KGs as defined by relation triples. Subsequent methods have incorporated entity names and attributes as supplementary information to improve the embeddings used for EA. However, existing methods lack a deep semantic understanding of entity attributes and relations. In this paper, we propose a Large Language Model (LLM) based entity alignment method, LLM-Align, which exploits the instruction-following and zero-shot capabilities of LLMs to infer entity alignments. LLM-Align uses heuristic methods to select important attributes and relations of entities, and then feeds the selected triples to an LLM to infer the alignment results. To guarantee the quality of the alignment results, we design a multi-round voting mechanism that mitigates the hallucination and positional bias issues of LLMs. Experiments on three EA datasets demonstrate that our approach achieves state-of-the-art performance compared with existing EA methods.
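The multi-round voting mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `ask_llm` callable, the prompt wording, and all parameter names are hypothetical stand-ins. Each round reshuffles the candidate order so positional bias cannot favor one slot, and answers outside the candidate list (hallucinations) are discarded before the majority vote.

```python
import random
from collections import Counter

def vote_alignment(source_entity, candidates, ask_llm, rounds=5, seed=0):
    """Choose the aligned entity for source_entity by majority vote.

    ask_llm(prompt) is a hypothetical callable wrapping an LLM API call
    that returns the model's chosen candidate name as a string.
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(rounds):
        # Shuffle candidate order each round to counter positional bias.
        order = rng.sample(candidates, len(candidates))
        prompt = (f"Which of the following entities refers to the same "
                  f"real-world object as '{source_entity}'? "
                  + "; ".join(order))
        answer = ask_llm(prompt)
        # Discard hallucinated answers that are not in the candidate list.
        if answer in candidates:
            votes[answer] += 1
    # Return the majority answer, or None if every round was discarded.
    return votes.most_common(1)[0][0] if votes else None
```

A stable model answer survives the vote regardless of candidate ordering, while inconsistent or out-of-list answers are filtered out rather than reported as alignments.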