Neural ordinary differential equations (neural ODEs) can effectively learn dynamical systems from time series data, but their behavior on graph-structured data remains poorly understood, especially when applied to graphs whose size or structure differs from those encountered during training. We study neural ODEs ($\mathtt{nODE}$s) with vector fields following the Barabási-Barzel form, trained on synthetic data from five common dynamical systems on graphs. Using the $\mathbb{S}^1$-model to generate graphs with realistic and tunable structure, we find that degree heterogeneity and the type of dynamical system are the primary factors determining $\mathtt{nODE}$s' ability to generalize across graph sizes and properties. This extends to $\mathtt{nODE}$s' ability to capture fixed points and to maintain performance amid missing data. Average clustering plays a secondary role in determining $\mathtt{nODE}$ performance. Our findings highlight $\mathtt{nODE}$s as a powerful approach to understanding complex systems, but underscore challenges arising from degree heterogeneity and clustering in realistic graphs.
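The vector-field form mentioned above can be sketched concretely. In the Barabási-Barzel form, each node's state evolves as $\dot{x}_i = F(x_i) + \sum_j A_{ij}\, G(x_i, x_j)$, and a neural ODE replaces $F$ and $G$ with learnable networks. Below is a minimal, hypothetical sketch assuming small randomly initialized tanh MLPs for $F$ and $G$ and a simple Euler integrator; the names, layer widths, and solver are illustrative, not the paper's actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(widths):
    """Return a small randomly initialized tanh MLP as a callable.

    Illustrative stand-in for the learned networks; no training is done here.
    """
    params = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
              for m, n in zip(widths[:-1], widths[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return forward

F = mlp([1, 16, 1])   # self-dynamics F(x_i)
G = mlp([2, 16, 1])   # pairwise interaction G(x_i, x_j)

def vector_field(x, A):
    """dx/dt = F(x_i) + sum_j A_ij G(x_i, x_j) for node states x, adjacency A."""
    N = len(x)
    self_term = F(x[:, None])[:, 0]
    Xi = np.broadcast_to(x[:, None], (N, N))   # x_i along rows
    Xj = np.broadcast_to(x[None, :], (N, N))   # x_j along columns
    pairs = np.stack([Xi, Xj], axis=-1).reshape(N * N, 2)
    Gmat = G(pairs)[:, 0].reshape(N, N)
    return self_term + (A * Gmat).sum(axis=1)

def euler_integrate(x0, A, dt=0.01, steps=100):
    """Forward-Euler rollout of the neural ODE from initial states x0."""
    x = x0.copy()
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * vector_field(x, A)
        traj.append(x.copy())
    return np.array(traj)

# Tiny example: a 4-node ring graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x0 = rng.normal(size=4)
traj = euler_integrate(x0, A)
print(traj.shape)  # (101, 4): steps + 1 snapshots of 4 node states
```

Separating the self-dynamics $F$ from the pairwise coupling $G$ is what lets a trained model be applied to graphs of a different size: the same two networks are reused for every node and edge, with only the adjacency matrix changing.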