We derive the divergence-kernel formula for the linear response of random dynamical systems. Specifically, the pathwise expression gives the parameter-derivative of the marginal or stationary density, rather than of an averaged observable. The formula accommodates multiplicative and parameterized noise over any period of time and does not require hyperbolicity. From it we derive a Monte Carlo algorithm for linear responses. We then develop a new framework of generative models, DK-SDE, in which the model is a parameterized SDE that (1) directly uses the KL divergence between the empirical data distribution and the marginal density of the SDE as the training objective, and (2) accommodates parameterizations of both drift and diffusion over a long time span, allowing prior structural knowledge to be incorporated explicitly. The optimization is carried out by gradient descent enabled by the divergence-kernel method, which involves only forward processes and therefore substantially reduces memory cost. We demonstrate the new model on a 20-dimensional Lorenz system.
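To make the setting concrete, the following is a minimal, hypothetical sketch of the object being differentiated: a forward Euler-Maruyama simulation of an SDE with parameterized drift and multiplicative, parameterized diffusion, together with a central-difference baseline estimate of a linear response. The drift, diffusion, and observable below are illustrative toy choices, not the paper's; the divergence-kernel method replaces such finite-difference estimators with a single forward pass and targets the density itself rather than an averaged observable.

```python
import numpy as np

# Toy parameterized SDE: dX = f(X; theta) dt + sigma(X; theta) dW.
# Both drift and diffusion depend on theta, as in the DK-SDE setting.

def drift(x, theta):
    # illustrative Ornstein-Uhlenbeck-like drift; theta shifts the mean
    return -(x - theta)

def diffusion(x, theta):
    # multiplicative, parameterized noise amplitude (illustrative)
    return 0.5 + 0.1 * theta * np.tanh(x)

def simulate(theta, n_paths=20000, n_steps=200, dt=0.01, seed=0):
    # Euler-Maruyama forward process; a fixed seed gives common random
    # numbers across theta values, reducing finite-difference variance
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        x = x + drift(x, theta) * dt + diffusion(x, theta) * dw
    return x

def response_fd(theta, eps=1e-2, phi=np.mean):
    # central-difference baseline for d/dtheta E[phi(X_T)]; this is the
    # naive estimator that pathwise formulas aim to improve upon
    return (phi(simulate(theta + eps)) - phi(simulate(theta - eps))) / (2 * eps)

print(response_fd(1.0))
```

For this toy drift the mean at time T = n_steps * dt satisfies E[X_T] = theta * (1 - e^{-T}), so the estimated response should be close to 1 - e^{-2}; the diffusion term does not affect the mean here, only the spread of the marginal density.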