Relational learning is an essential task in knowledge representation, particularly in knowledge graph completion (KGC). While relational learning has been extensively studied in traditional single-modal settings, exploring it in a multimodal KGC context presents distinct challenges and opportunities. One major challenge is inference over newly discovered relations that have no associated training data. This zero-shot relational learning scenario imposes a unique requirement on multimodal KGC, namely leveraging multimodal information to facilitate relational learning. However, existing works do not support the use of multimodal information in this setting, leaving the problem unexplored. In this paper, we propose a novel end-to-end framework consisting of three components, i.e., a multimodal learner, a structure consolidator, and a relation embedding generator, which integrates diverse multimodal information with knowledge graph structure to facilitate zero-shot relational learning. Evaluation results on three multimodal knowledge graphs demonstrate the superior performance of our proposed method.