Large Language Models (LLMs) are often trained on vast amounts of undisclosed data, motivating the development of post-hoc Membership Inference Attacks (MIAs) to gain insight into their training data composition. However, in this paper, we identify inherent challenges in post-hoc MIA evaluation due to potential distribution shifts between collected member and non-member datasets. Using a simple bag-of-words classifier, we demonstrate that datasets used in recent post-hoc MIAs suffer from significant distribution shifts, in some cases achieving near-perfect distinction between members and non-members. This implies that previously reported high MIA performance may be largely attributable to these shifts rather than to model memorization. We confirm that randomized, controlled setups eliminate such shifts, enabling the development and fair evaluation of new MIAs. However, such randomized setups are rarely available for the latest LLMs, so post-hoc data collection remains necessary to infer membership for real-world LLMs. As a potential solution, we propose a Regression Discontinuity Design (RDD) approach for post-hoc data collection that substantially mitigates distribution shifts. Evaluating various MIA methods in this RDD setup yields performance barely above random guessing, in stark contrast to previously reported results. Overall, our findings highlight the challenges of accurately measuring LLM memorization and the need for careful experimental design in (post-hoc) membership inference tasks.
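To make the distribution-shift check concrete, below is a minimal sketch of such a "blind" bag-of-words classifier: it never queries the target LLM, so any cross-validated AUC well above 0.5 can only reflect dataset shift, not memorization. The specific choices here (scikit-learn's CountVectorizer, LogisticRegression, and 5-fold ROC-AUC) are illustrative assumptions, not necessarily the paper's exact configuration.

```python
# A minimal sketch of a "blind" bag-of-words membership classifier, used to
# detect distribution shift between member and non-member text sets.
# Assumptions (ours, not the paper's): CountVectorizer features, logistic
# regression, and 5-fold cross-validated ROC-AUC as the shift score.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline


def distribution_shift_auc(members, non_members):
    """Cross-validated ROC-AUC of a classifier that sees only raw text,
    never the target LLM. AUC ~ 0.5 means the two sets are statistically
    indistinguishable; AUC near 1.0 signals a strong distribution shift."""
    texts = list(members) + list(non_members)
    labels = np.array([1] * len(members) + [0] * len(non_members))
    clf = make_pipeline(
        CountVectorizer(max_features=10_000),   # bag-of-words features
        LogisticRegression(max_iter=1_000),
    )
    scores = cross_val_score(clf, texts, labels, cv=5, scoring="roc_auc")
    return scores.mean()


# Usage: any AUC well above 0.5 suggests reported MIA gains on these sets
# may stem from dataset shift rather than model memorization.
# auc = distribution_shift_auc(member_texts, non_member_texts)
```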
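As one way the RDD-style collection might look in practice: treat the model's training-data cutoff date as the threshold and keep only documents published within a narrow window on either side of it, so that presumed members and non-members come from nearly the same distribution. The sketch below is a hypothetical illustration; the function name, document schema, and window size are our assumptions, not the paper's exact procedure.

```python
from datetime import datetime, timedelta


def rdd_split(docs, cutoff, window_days=30):
    """Hypothetical RDD-style collection: documents dated just before the
    training cutoff are taken as (presumed) members, those just after as
    non-members. A narrow window keeps both sides close to the same
    distribution, so a blind classifier should score near AUC 0.5."""
    lo = cutoff - timedelta(days=window_days)
    hi = cutoff + timedelta(days=window_days)
    members = [d for d in docs if lo <= d["date"] < cutoff]       # just before cutoff
    non_members = [d for d in docs if cutoff <= d["date"] <= hi]  # just after cutoff
    return members, non_members


# Example with an assumed schema:
# docs = [{"text": "...", "date": datetime(2023, 3, 15)}, ...]
# members, non_members = rdd_split(docs, cutoff=datetime(2023, 4, 1))
```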