Deep generative models aim to learn the underlying distribution of data and to generate new samples from it. Despite the diversity of generative models and their high-quality generation performance in practice, most of them lack rigorous theoretical convergence proofs. In this work, we establish convergence results for OT-Flow, one such deep generative model. First, by reformulating the framework of the OT-Flow model, we establish the $\Gamma$-convergence of the OT-Flow formulation to the corresponding optimal transport (OT) problem as the regularization parameter $\alpha$ tends to infinity. Second, since the loss function is approximated by the Monte Carlo method during training, we also establish the convergence of the discrete loss function to the continuous one as the sample number $N$ tends to infinity. Moreover, the approximation capability of the neural network provides an upper bound on the discrete loss function at the minimizers. Together, these results provide rigorous theoretical guarantees for OT-Flow.
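As a brief illustration of the Monte Carlo approximation mentioned above, one may write the continuous loss as an expectation and the discrete loss as an empirical average; the symbols $J$, $J_N$, $\ell$, $\theta$, $\rho$, and $x_i$ below are expository notation, not taken from the paper:
$$
J(\theta) = \mathbb{E}_{x\sim\rho}\!\left[\ell(x;\theta)\right], \qquad
J_N(\theta) = \frac{1}{N}\sum_{i=1}^{N} \ell(x_i;\theta), \quad x_i \overset{\text{i.i.d.}}{\sim} \rho,
$$
so that, for fixed parameters $\theta$, the law of large numbers gives $J_N(\theta)\to J(\theta)$ almost surely as $N\to\infty$; the convergence studied in this work concerns this approximation at the level of the training objective.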