The promise of tabular generative models is to produce realistic synthetic data that can be shared and used safely, without leaking information from the training set. In evaluating these models, a variety of methods have been proposed to measure a model's tendency to copy training data when generating a sample. However, these methods either fail to consider data-copying from a privacy-threat perspective, lack grounding in recent results from the data-copying literature, or are difficult to adapt to the high-dimensional, mixed-type nature of tabular data. This paper proposes the Data Plagiarism Index (DPI), a new similarity metric and membership inference attack for tabular data. We show that DPI evaluates a new, intuitive definition of data-copying and characterizes the corresponding privacy risk. We further show that the data-copying identified by DPI poses both privacy and fairness threats to common, high-performing architectures, underscoring the need for more sophisticated generative modeling techniques to mitigate this issue.