The ever-increasing use of generative models in fields that rely on tabular data highlights the need for robust, standardized validation metrics to assess the similarity between real and synthetic data. Current methods lack a unified framework and rely on diverse and often inconclusive statistical measures. Divergences, which quantify discrepancies between data distributions, offer a promising avenue for validation. However, traditional approaches calculate divergences independently for each feature, owing to the complexity of modeling joint distributions. This paper addresses that limitation: our core contribution is the use of a divergence estimator to build a validation metric over the joint distribution of real and synthetic data, rather than over marginal comparisons alone. We leverage a probabilistic classifier to approximate the density ratio between the two datasets, allowing the metric to capture complex relationships among features. We compute two divergences: the well-known Kullback-Leibler (KL) divergence and the Jensen-Shannon (JS) divergence. KL divergence is well established in the field, while JS divergence is symmetric and bounded, yielding a more stable and interpretable metric. We demonstrate the efficacy of this approach through a series of experiments of varying distributional complexity. The initial phase compares estimated divergences with analytical solutions for simple distributions, setting a benchmark for accuracy. Finally, we validate our method on a real-world dataset and its corresponding synthetic counterpart, showcasing its effectiveness in practical applications. This research offers a significant contribution with applicability beyond tabular data and the potential to improve synthetic data validation across fields.
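The classifier-based estimation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it trains a logistic regression (by plain gradient descent in NumPy) to discriminate real from synthetic samples, then uses the standard density-ratio identity for balanced classes, r(x) = p(x)/q(x) ≈ D(x)/(1 − D(x)), to estimate KL(p‖q) ≈ E_p[log r(x)] and JS(p, q) = log 2 + ½(E_p[log D(x)] + E_q[log(1 − D(x))]). The Gaussian toy data, sample sizes, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
real = rng.normal(0.0, 1.0, n)    # p = N(0, 1)
synth = rng.normal(1.0, 1.0, n)   # q = N(1, 1); analytical KL(p||q) = 0.5 nats

# Train a probabilistic classifier D(x) ~ P(real | x): logistic regression
# fit by gradient descent on the combined, balanced sample.
x = np.concatenate([real, synth])
y = np.concatenate([np.ones(n), np.zeros(n)])
w, b = 0.0, 0.0
for _ in range(2000):
    p_hat = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p_hat - y) * x)   # gradient of mean log-loss w.r.t. w
    b -= 0.1 * np.mean(p_hat - y)         # gradient w.r.t. b

def d(t):
    """Classifier probability that sample t is real."""
    return 1.0 / (1.0 + np.exp(-(w * t + b)))

# KL(p||q) ~ E_p[log D(x) - log(1 - D(x))]  (balanced-class density ratio)
kl_est = np.mean(np.log(d(real)) - np.log(1.0 - d(real)))

# JS(p,q) = log 2 + 0.5 * (E_p[log D(x)] + E_q[log(1 - D(x))]), bounded by log 2
js_est = np.log(2.0) + 0.5 * (
    np.mean(np.log(d(real))) + np.mean(np.log(1.0 - d(synth)))
)
```

For these two unit-variance Gaussians the optimal log density ratio is linear in x, so logistic regression is well specified and `kl_est` should land near the analytical value of 0.5; `js_est` is bounded above by log 2, illustrating why the paper favors JS for a normalized score.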