The growing use of generative models for tabular data across many fields highlights the need for robust, standardized validation metrics to assess the similarity between real and synthetic data. Current methods lack a unified framework and rely on diverse and often inconclusive statistical measures. Divergences, which quantify discrepancies between data distributions, offer a promising avenue for validation. However, because modeling joint distributions is complex, traditional approaches compute divergences independently for each feature. This paper addresses this challenge by proposing a novel approach that uses divergence estimation to overcome the limitations of such marginal comparisons. Our core contribution is the application of a divergence estimator to build a validation metric over the joint distribution of real and synthetic data. We leverage a probabilistic classifier to approximate the density ratio between the two datasets, allowing complex feature relationships to be captured. We compute two divergences: the well-known Kullback-Leibler (KL) divergence and the Jensen-Shannon (JS) divergence. KL divergence is well established in the field, while JS divergence is symmetric and bounded, making it a reliable metric. The efficacy of this approach is demonstrated through a series of experiments with distributions of varying complexity. In the initial phase, estimated divergences are compared against analytical solutions for simple distributions, establishing a benchmark for accuracy. Finally, we validate our method on a real-world dataset and its synthetic counterpart, demonstrating its effectiveness in practical applications. This research offers a significant contribution with applicability beyond tabular data and the potential to improve synthetic data validation in various fields.
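The classifier-based estimation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a logistic-regression classifier, equal sample sizes for the real and synthetic sets (so class priors cancel in the density ratio), and uses the identity KL(P||Q) = E_P[log p(x)/q(x)] together with JS(P,Q) = ½KL(P||M) + ½KL(Q||M) with M = (P+Q)/2. The function name `estimate_divergences` is hypothetical.

```python
# Sketch of classifier-based divergence estimation between real and
# synthetic samples (illustrative only; assumes equal sample sizes so
# that class priors cancel in the density-ratio approximation).
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_divergences(real, synth, eps=1e-12):
    """Estimate KL(P||Q) and JS(P,Q) from samples via a probabilistic classifier."""
    X = np.vstack([real, synth])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(synth))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    def log_ratio(x):
        # P(real|x) / P(synth|x) approximates the density ratio p(x)/q(x)
        p = clf.predict_proba(x)[:, 1]
        return np.log(p + eps) - np.log(1.0 - p + eps)

    # KL(P||Q) ~ mean of log p(x)/q(x) over real samples
    kl = log_ratio(real).mean()

    # JS(P,Q) = 0.5*E_P[log 2r/(1+r)] + 0.5*E_Q[log 2/(1+r)], r = p/q
    r_real = np.exp(log_ratio(real))    # density ratio evaluated on real samples
    r_synth = np.exp(log_ratio(synth))  # density ratio evaluated on synthetic samples
    js = 0.5 * np.mean(np.log(2.0 * r_real / (1.0 + r_real) + eps)) \
       + 0.5 * np.mean(np.log(2.0 / (1.0 + r_synth) + eps))
    return kl, js
```

For two Gaussians with equal covariance the log density ratio is linear in x, so a logistic-regression classifier is well specified and the estimates can be checked against the analytical divergences, mirroring the benchmarking step described in the abstract.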