We introduce an analytical framework to quantify the change in a machine learning algorithm's output distribution after the inclusion of a few data points in its training set, a notion we define as leave-one-out distinguishability (LOOD). This quantity is key to measuring data **memorization** and information **leakage**, as well as the **influence** of training data points, in machine learning. We illustrate how our method broadens and refines existing empirical measures of memorization and of the privacy risks associated with training data. We use Gaussian processes to model the randomness of machine learning algorithms, and validate LOOD with extensive empirical analysis of leakage using membership inference attacks. Our analytical framework enables us to investigate the causes of leakage and to locate where the leakage is high. For example, we analyze the influence of activation functions on data memorization. In addition, our method allows us to identify the queries that disclose the most information about the training data in the leave-one-out setting. We show how such optimal queries can be used for accurate **reconstruction** of training data.
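To make the definition concrete, below is a minimal sketch under simplifying assumptions: the learning algorithm is modeled as Gaussian-process regression with an RBF kernel, and LOOD at a query point is taken to be the KL divergence between the two Gaussian output distributions obtained with and without one extra training point. The kernel, noise level, KL direction, and toy data are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between row-sets A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(X, y, q, noise=1e-3):
    """Posterior mean and variance of a GP at query points q, trained on (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    k_q = rbf_kernel(X, q)                      # cross-covariances, shape (n, m)
    mean = k_q.T @ np.linalg.solve(K, y)
    cov = rbf_kernel(q, q) - k_q.T @ np.linalg.solve(K, k_q)
    return mean.ravel(), np.diag(cov)

def lood(X, y, x_new, y_new, q, noise=1e-3):
    """KL divergence at query q between the GP output distributions
    trained with vs. without the extra point (x_new, y_new)."""
    m0, v0 = gp_posterior(X, y, q, noise)       # leave-one-out model
    X1 = np.vstack([X, x_new])
    y1 = np.concatenate([y, y_new])
    m1, v1 = gp_posterior(X1, y1, q, noise)     # model including the point
    # KL(N(m1, v1) || N(m0, v0)), elementwise per query point
    return 0.5 * (np.log(v0 / v1) + (v1 + (m1 - m0) ** 2) / v0 - 1.0)

# Toy usage: the divergence is typically largest for queries near the left-out point.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sin(X[:, 0])
x_new = np.array([[2.0, 2.0]])
y_new = np.array([1.5])
print(lood(X, y, x_new, y_new, q=x_new))        # query at the left-out point itself
```

Maximizing this quantity over the query q is one way to read the abstract's notion of an optimal query: the point at which the model's output reveals the most about whether the left-out example was in the training set.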