Although membership inference attacks (MIAs) and machine-generated text detection target different goals, their methods often exploit similar signals based on a language model's probability distribution, yet the two tasks have largely been studied independently. This separation can lead to conclusions that overlook stronger methods and valuable insights from the other task. In this work, we theoretically and empirically demonstrate the transferability between MIAs and machine-generated text detection, i.e., how well a method originally developed for one task performs on the other. We prove that the metric achieving asymptotically optimal performance is identical for both tasks. We unify existing methods under this optimal metric and hypothesize that the accuracy with which a method approximates this metric is directly correlated with its transferability. Our large-scale empirical experiments demonstrate very strong rank correlation ($\rho \approx 0.7$) in cross-task performance. Notably, we also find that a machine-generated text detector achieves the strongest performance among evaluated methods on both tasks, demonstrating the practical impact of transferability. To facilitate cross-task development and fair evaluation, we introduce MINT, a unified evaluation suite for MIAs and machine-generated text detection, implementing 15 recent methods from both tasks.
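As an illustration of the shared signal (a minimal sketch, not the paper's exact optimal metric), many MIAs (e.g., the loss attack) and zero-shot machine-text detectors both threshold a sequence's average token log-likelihood under the target model. The probabilities and threshold below are hypothetical, standing in for per-token probabilities produced by a real language model:

```python
import math

def avg_log_likelihood(token_probs):
    """Average token log-likelihood: the common score that both
    loss-based MIAs and likelihood-based detectors threshold."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

def flag_text(token_probs, threshold):
    """Flag text as 'member' (MIA) or 'machine-generated' (detection)
    when it is unusually likely under the model, i.e., its score
    exceeds the threshold."""
    return avg_log_likelihood(token_probs) > threshold

# Hypothetical per-token probabilities for two example texts:
high_prob_text = [0.9, 0.8, 0.85, 0.95]   # model assigns high probability
low_prob_text = [0.1, 0.05, 0.2, 0.15]    # model assigns low probability

print(flag_text(high_prob_text, threshold=-1.0))  # True
print(flag_text(low_prob_text, threshold=-1.0))   # False
```

The same score function serves both tasks; only the data it is applied to (training-set candidates vs. suspected machine outputs) and the threshold calibration differ, which is the intuition behind cross-task transferability.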