We propose a deep neural network architecture whose output is, by construction, an invertible symplectomorphism of its input. The design is analogous to the real-valued non-volume-preserving (real NVP) transformations used in normalizing flows. Networks of this type make it possible to learn unknown Hamiltonian systems without breaking the inherent symplectic structure of phase space.
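To illustrate the idea, here is a minimal sketch of one coupling layer in the spirit described above: a shear map (q, p) ↦ (q, p + ∇V(q)), where V is a small scalar-valued network. Because its Jacobian is [[I, 0], [∇²V, I]] with a symmetric Hessian block, the map is exactly symplectic and trivially invertible. This is an illustrative assumption about the layer form, not the paper's actual architecture; all names (`grad_V`, the tiny MLP weights) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny MLP defining a scalar potential V(q) = w2 . tanh(W1 q + b1);
# only its gradient with respect to q is needed for the shear map.
W1 = rng.normal(size=(8, 2))
b1 = rng.normal(size=8)
w2 = rng.normal(size=8)

def grad_V(q):
    # Closed-form gradient of V(q) = w2 . tanh(W1 q + b1)
    h = np.tanh(W1 @ q + b1)
    return W1.T @ (w2 * (1.0 - h**2))

def layer(q, p):
    # Shear map (q, p) -> (q, p + grad V(q)): symplectic, trivially invertible
    return q, p + grad_V(q)

def inverse_layer(q, p):
    return q, p - grad_V(q)

# Numerical check of symplecticity: the Jacobian J must satisfy J^T Omega J = Omega
def jacobian(f, x, eps=1e-6):
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)  # central differences
    return J

def flat_layer(z):
    q2, p2 = layer(z[:2], z[2:])
    return np.concatenate([q2, p2])

z0 = rng.normal(size=4)
Omega = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.eye(2), np.zeros((2, 2))]])
J = jacobian(flat_layer, z0)
print(np.allclose(J.T @ Omega @ J, Omega, atol=1e-5))

# Invertibility: composing the layer with its inverse recovers the input
q1, p1 = layer(z0[:2], z0[2:])
q0, p0 = inverse_layer(q1, p1)
print(np.allclose(np.concatenate([q0, p0]), z0))
```

Alternating such momentum shears with analogous position shears (q + ∇T(p), p), mirroring the alternating masks of real NVP, yields an expressive composition that remains symplectic end to end.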