Synthetic data has garnered attention as a Privacy Enhancing Technology (PET) in sectors such as healthcare and finance. When synthetic data is used in practical applications, it is important to provide privacy protection guarantees. In the literature, two families of approaches have been proposed for tabular data. On the one hand, similarity-based methods measure the level of similarity between training and synthetic data; a privacy breach can occur if the generated data is consistently too similar, or even identical, to the training data. On the other hand, attack-based methods conduct deliberate attacks on synthetic datasets, and the success rates of these attacks reveal how secure the synthetic datasets are. In this paper, we introduce a contrastive method that improves the privacy assessment of synthetic datasets by embedding the data in a more representative space. This overcomes obstacles posed by the multitude of data types and attributes, and it makes intuitive distance metrics usable both for similarity measurements and as an attack vector. In a series of experiments with publicly available datasets, we compare the performance of similarity-based and attack-based methods, both with and without the contrastive learning-based embeddings. Our results show that relatively efficient, easy-to-implement privacy metrics can perform as well as more advanced metrics that explicitly model the conditions for privacy referred to by the GDPR.
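To make the similarity-based idea concrete, a minimal sketch of a distance-to-closest-record style check is shown below, assuming records have already been mapped to numeric embedding vectors (the function name and the toy data are illustrative, not the paper's implementation). Each synthetic record's distance to its nearest training record is computed; consistently small distances would flag potential memorization.

```python
import numpy as np

def nearest_neighbor_distances(synthetic, training):
    """For each synthetic record (row), return the Euclidean distance
    to its closest training record. Consistently small distances
    suggest the generator may be reproducing training data."""
    # Pairwise differences via broadcasting: shape (n_syn, n_train, dim)
    diffs = synthetic[:, None, :] - training[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))  # (n_syn, n_train)
    return dists.min(axis=1)                    # closest training record

# Toy example: the first synthetic point copies a training record
# exactly, so its nearest-neighbor distance is 0.
train = np.array([[0.0, 0.0], [1.0, 1.0]])
syn = np.array([[0.0, 0.0], [0.5, 0.5]])
d = nearest_neighbor_distances(syn, train)
```

In the paper's setting, such a metric would be applied in the contrastive embedding space rather than on raw mixed-type attributes, which is what makes a plain Euclidean distance meaningful.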