Modern approaches for learning on dynamic graphs process updates in batches rather than applying them one by one. Batching makes these techniques practical in streaming scenarios, where graph updates arrive at very high rates. Batching, however, forces the models to update infrequently, which degrades their performance. In this work, we propose a decoupling strategy that enables models to update frequently while still operating on batches. By decoupling the core modules of temporal graph networks and implementing them with a minimal number of learnable parameters, we developed the Lightweight Decoupled Temporal Graph Network (LDTGN), an exceptionally efficient model for learning on dynamic graphs. LDTGN was validated on various dynamic graph benchmarks, achieving comparable or state-of-the-art results with significantly higher throughput than prior art. Notably, our method outperforms previous approaches by more than 20\% on benchmarks that require rapid model update rates, such as USLegis or UNTrade. The code to reproduce our experiments is available at \href{https://orfeld415.github.io/module-decoupling}{this URL}.