We consider the problem of sampling lattice field configurations from the Boltzmann distribution corresponding to a given action. Since such densities arise as approximations of an underlying functional density, we frame the task as an instance of operator learning. We propose to approximate a time-dependent neural operator whose time integral provides a mapping between the functional distributions of the free and target theories. Once a particular lattice is chosen, the neural operator can be discretized to a finite-dimensional, time-dependent vector field, which in turn induces a continuous normalizing flow between finite-dimensional distributions over the chosen lattice. This flow can then be trained to be a diffeomorphism between the discretized free and target theories on that lattice and, by construction, can be evaluated on different discretizations of spacetime. We experimentally validate the proposal on the two-dimensional $\phi^4$-theory, exploring to what extent such operator-based flow architectures generalize to lattice sizes they were not trained on, and show that pretraining on smaller lattices can lead to a speedup over training directly on the target lattice size.
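The key structural point above, that a discretized time-dependent vector field can be integrated into a flow and, if built from resolution-independent (e.g. local, translation-equivariant) operations, evaluated on lattices of sizes it was not trained on, can be illustrated with a minimal sketch. Everything here is a toy stand-in: `vector_field` is a hand-chosen placeholder for the learned neural operator, and the Euler integrator is the simplest possible ODE solver for a continuous normalizing flow.

```python
import numpy as np

def vector_field(phi, t):
    # Placeholder drift on an L x L periodic lattice: a cubic damping term
    # plus a nearest-neighbour (Laplacian-like) stencil. A trained neural
    # operator would replace this function; only its locality matters here.
    neighbours = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                  + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    return -t * phi**3 + 0.1 * neighbours

def flow(phi0, n_steps=50):
    # Euler integration of d(phi)/dt = v(phi, t) from t = 0 to t = 1,
    # transporting a free-theory-like sample along the flow.
    phi, dt = phi0.copy(), 1.0 / n_steps
    for k in range(n_steps):
        phi = phi + dt * vector_field(phi, k * dt)
    return phi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Because vector_field uses only local stencils, the same flow applies
    # unchanged to lattices of different sizes:
    for L in (8, 16):
        phi0 = rng.standard_normal((L, L))  # stand-in for a free-field draw
        print(L, flow(phi0).shape)
```

In an actual implementation the log-density change along the flow (the divergence of the vector field) would be tracked alongside the samples so the flow can be trained against the target action; that bookkeeping is omitted here for brevity.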