Generative Adversarial Networks (GANs) are popular and successful generative models. Despite this success, their optimization is notoriously challenging. In this work, we explain both the success and the limitations of GANs by casting them as Bayesian neural networks with partial stochasticity. This interpretation allows us to establish conditions for universal approximation and to rewrite the adversarial-style optimization of several GAN variants as the optimization of a proxy for the likelihood obtained by marginalizing out the stochastic variables. Under this interpretation, the need for regularization becomes apparent, and we propose adopting strategies that smooth the loss landscape together with methods that search for minimum-description-length solutions, which are associated with flat minima and good generalization. Results from a wide range of experiments indicate that these strategies yield performance improvements and pave the way to a deeper understanding of GANs.