We propose a framework for solving evolution equations within parametric function classes, especially ones specified by neural networks. We call this framework the minimal neural evolution (MNE) because it is motivated by the goal of seeking the smallest instantaneous change in the neural network parameters that is compatible with exactly satisfying the evolution equation at a set of evolving collocation points. Formally, the MNE is quite similar to the recently introduced Neural Galerkin framework, but a difference in perspective motivates an alternative sketching procedure that effectively reduces the linear systems solved within the integrator to a size interpretable as an effective rank of the evolving neural tangent kernel, while maintaining a smooth evolution equation for the neural network parameters. We focus specifically on the application of this framework to diffusion processes, where the score function allows us to define intuitive dynamics for the collocation points. These can in turn be propagated jointly with the neural network parameters using a high-order adaptive integrator. In particular, we demonstrate how the Ornstein-Uhlenbeck diffusion process can be used for the task of sampling from a probability distribution given a formula for the density but no training data. This framework extends naturally to allow for conditional sampling and marginalization, and we show how to systematically remove the sampling bias due to parametric approximation error. We validate the efficiency, systematic improvability, and scalability of our approach on illustrative examples in low and high spatial dimensions.
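The core structure described above — choosing the smallest instantaneous parameter change compatible with the evolution equation at a set of collocation points — amounts to a minimal-norm least-squares solve at each time step. The following is a minimal NumPy sketch under simplifying assumptions: the two-parameter model, the heat equation right-hand side, and the fixed collocation grid are hypothetical stand-ins, not the paper's actual setup, and no sketching of the linear system is performed.

```python
import numpy as np

# Hypothetical toy model u(x; theta) = theta_0 * sin(x) + theta_1 * cos(x),
# evolved under the heat equation u_t = u_xx, used only to illustrate the
# least-squares structure of the parameter dynamics.

def grad_theta_u(theta, x):
    # Jacobian of u with respect to theta at each collocation point
    # (rows index points, columns index parameters).
    return np.stack([np.sin(x), np.cos(x)], axis=1)

def rhs(theta, x):
    # u_xx for this model: -theta_0 * sin(x) - theta_1 * cos(x).
    return -theta[0] * np.sin(x) - theta[1] * np.cos(x)

theta = np.array([1.0, 0.5])
x = np.linspace(0.0, 2 * np.pi, 32)   # collocation points (static here)

J = grad_theta_u(theta, x)            # shape (n_points, n_params)
f = rhs(theta, x)                     # shape (n_points,)

# Minimal-norm parameter velocity: the smallest theta_dot satisfying
# J @ theta_dot ≈ f at the collocation points.
theta_dot, *_ = np.linalg.lstsq(J, f, rcond=None)
print(theta_dot)                      # ≈ [-1.0, -0.5] for this exact model
```

In an actual MNE/Neural Galerkin integrator, `theta_dot` would be fed to a time stepper (e.g. an adaptive Runge-Kutta method), the collocation points would move along score-driven dynamics, and a sketching matrix would compress `J` before the solve.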