Link prediction models can benefit from incorporating textual descriptions of entities and relations, enabling fully inductive learning and flexibility in dynamic graphs. We address the challenge of also capturing rich structured information about the local neighbourhood of entities and their relations by introducing a Transformer-based approach that effectively integrates textual descriptions with graph structure, reducing the reliance on resource-intensive text encoders. Our experiments on three challenging datasets show that our Fast-and-Frugal Text-Graph (FnF-TG) Transformers achieve superior performance compared to previous state-of-the-art methods, while maintaining efficiency and scalability.