We propose Hellinger-type loss functions for training Generative Adversarial Networks (GANs), motivated by the boundedness, symmetry, and robustness properties of the Hellinger distance. We define an adversarial objective based on this divergence and study its statistical properties within a general parametric framework. We establish the existence, uniqueness, consistency, and joint asymptotic normality of the estimators obtained from the adversarial training procedure. In particular, we analyze the joint estimation of the generator and discriminator parameters, providing a comprehensive asymptotic characterization of the resulting estimators. We introduce two implementations of the Hellinger-type loss and evaluate their empirical behavior against the classical (Maximum Likelihood-type) GAN loss. Through a controlled simulation study, we demonstrate that both proposed losses yield improved estimation accuracy and robustness under increasing levels of data contamination.
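For context, a minimal sketch of the divergence underlying these losses (illustrative notation, not necessarily that of the paper): assuming densities $p$ and $q$ with respect to a common dominating measure $\mu$, the squared Hellinger distance is symmetric in its arguments and bounded between $0$ and $1$, the properties invoked above.

% Illustrative background only: the standard squared Hellinger distance,
% written for densities p and q with respect to a dominating measure \mu;
% its boundedness (0 <= H^2 <= 1) and symmetry motivate the proposed losses.
\[
  H^2(p, q)
  \;=\;
  \frac{1}{2} \int \bigl( \sqrt{p(x)} - \sqrt{q(x)} \bigr)^{2} \, d\mu(x)
  \;=\;
  1 - \int \sqrt{p(x)\, q(x)} \, d\mu(x).
\]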