Graph generative modelling has become an essential task owing to its wide range of applications in chemistry, biology, social networks, and knowledge representation. In this work, we propose a novel framework for generating graphs by adapting the Generator Matching (arXiv:2410.20587) paradigm to graph-structured data. We leverage the graph Laplacian and its associated heat kernel to define a continuous-time diffusion on each graph. The Laplacian serves as the infinitesimal generator of this diffusion, and its heat kernel provides a family of conditional perturbations of the initial graph. A neural network is trained to match this generator by minimising a Bregman divergence between the true generator and a learnable surrogate. Once trained, the surrogate generator is used to simulate a time-reversed diffusion process to sample new graph structures. Our framework unifies and generalises existing diffusion-based graph generative models, injecting domain-specific inductive bias via the Laplacian while retaining the flexibility of neural approximators. Experiments demonstrate that our approach effectively captures the structural properties of real and synthetic graphs.
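To make the central objects concrete, the following is a minimal sketch of the graph Laplacian and its heat kernel as described above. It assumes the combinatorial Laplacian L = D − A on an undirected graph and computes the heat kernel exp(−tL) by eigendecomposition; the paper's exact parameterisation (e.g. a normalised Laplacian or a learned generator) may differ, and the 4-node cycle graph is purely illustrative.

```python
import numpy as np


def graph_laplacian(A):
    # Combinatorial Laplacian L = D - A for a symmetric adjacency matrix A.
    return np.diag(A.sum(axis=1)) - A


def heat_kernel(L, t):
    # Heat kernel H_t = exp(-t L). Since L is symmetric, we diagonalise
    # L = V diag(w) V^T and exponentiate the eigenvalues directly.
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(-t * w)) @ V.T


# Toy example: adjacency matrix of a 4-node cycle graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = graph_laplacian(A)
H = heat_kernel(L, t=0.5)

# Sanity check: the constant vector lies in the kernel of L, so each row
# of H_t sums to 1 -- the heat-kernel diffusion conserves total mass.
print(np.allclose(H.sum(axis=1), 1.0))  # True
```

Varying `t` interpolates between the identity (t = 0) and a fully mixed state (t → ∞), which is the family of conditional perturbations of the initial graph that the generator-matching objective is trained against.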