Current Multimodal Knowledge Graph Construction (MKGC) models struggle with the real-world dynamism of continuously emerging entities and relations, often succumbing to catastrophic forgetting: the loss of previously acquired knowledge. This study introduces benchmarks aimed at fostering the development of the continual MKGC domain. We further introduce the MSPT framework, designed to overcome the shortcomings of existing MKGC approaches when processing multimedia data. MSPT balances the retention of learned knowledge (stability) with the integration of new data (plasticity), outperforming current continual learning and multimodal methods. Our results confirm MSPT's superior performance in evolving knowledge environments, showcasing its capacity to balance stability and plasticity.