We propose a new density estimation algorithm. Given $n$ i.i.d. observations from a distribution belonging to a class of densities on $\mathbb{R}^d$, our estimator outputs any density in the class whose ``perceptron discrepancy'' with the empirical distribution is at most $O(\sqrt{d/n})$. The perceptron discrepancy is defined as the largest difference in mass that two distributions place on any halfspace. We show that this estimator achieves an expected total variation distance to the truth that is almost minimax optimal over the class of densities with bounded Sobolev norm and over Gaussian mixtures. This suggests that the regularity of the prior distribution may explain the effectiveness of a ubiquitous step in machine learning: replacing optimization over large function spaces with optimization over simpler parametric classes (such as the discriminators of GANs). We also show that replacing the perceptron discrepancy with the generalized energy distance of Székely and Rizzo (2013) further improves the total variation loss. The generalized energy distance between empirical distributions is easily computable and differentiable, which makes it especially useful for fitting generative models. To the best of our knowledge, it is the first ``simple'' distance with these properties that also enjoys minimax statistical guarantees.
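To make the computability claim concrete, the following is a minimal sketch of the (plain, $\alpha=1$) energy distance of Székely and Rizzo between two empirical samples, $\mathcal{E}(X,Y) = 2\,\mathbb{E}\|X-Y\| - \mathbb{E}\|X-X'\| - \mathbb{E}\|Y-Y'\|$; the function name and vectorized implementation are illustrative, not taken from the paper, and the generalized version discussed above would replace the Euclidean norm with a different kernel.

```python
import numpy as np

def energy_distance(x, y):
    """Empirical energy distance between samples x (n, d) and y (m, d).

    Computes 2*E||X - Y|| - E||X - X'|| - E||Y - Y'|| by averaging all
    pairwise Euclidean distances. Illustrative sketch, not the paper's code.
    """
    # Mean pairwise distances between and within the two samples.
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1).mean()
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1).mean()
    dyy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1).mean()
    return 2.0 * dxy - dxx - dyy
```

Because each term is a smooth function of the sample points (away from coincident points), the quantity is differentiable in the generator's output, which is what makes it convenient as a training loss for generative models.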