Small language models (SLMs) are increasingly valued for their efficiency and deployability in resource-constrained environments, making them attractive for on-device, privacy-sensitive, and edge computing applications. At the same time, membership inference attacks (MIAs), which aim to determine whether a given sample was used in a model's training, pose an important threat with serious privacy and intellectual property implications. In this paper, we study MIAs on SLMs. Although MIAs have been shown to be effective on large language models (LLMs), they remain relatively understudied on emerging SLMs, and their effectiveness decreases as models get smaller. Motivated by this finding, we propose a new MIA called win-k, which builds on top of a state-of-the-art attack (min-k). We evaluate win-k experimentally, comparing it against five existing MIAs across three datasets and eight SLMs. Results show that win-k outperforms existing MIAs in terms of AUROC, TPR @ 1% FPR, and FPR @ 99% TPR, especially on smaller models.
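For context, the min-k baseline (Min-K% Prob) scores a candidate text by averaging the log-probabilities of its least-likely tokens under the target model; higher scores suggest membership in the training data. A minimal sketch of that scoring step (the function name and default `k` are illustrative, and obtaining per-token log-probabilities from the model is assumed):

```python
def min_k_score(token_log_probs, k=0.2):
    """Min-K% Prob membership score.

    token_log_probs: per-token log-probabilities of the candidate text
                     under the target model.
    k: fraction of lowest-probability tokens to average over.

    Returns the mean log-probability of the k fraction of tokens with
    the lowest log-probability; higher values suggest the text is more
    likely to have been seen during training.
    """
    n = max(1, int(len(token_log_probs) * k))
    lowest = sorted(token_log_probs)[:n]  # the n least-likely tokens
    return sum(lowest) / n

# A text the model assigns uniformly high probability scores higher
# than one containing surprising (low-probability) tokens:
member_like = [-0.1, -0.2, -0.1, -0.3]
nonmember_like = [-0.1, -4.5, -0.2, -3.8]
assert min_k_score(member_like) > min_k_score(nonmember_like)
```

The attack then thresholds this score to predict membership; sweeping the threshold yields the AUROC and TPR/FPR operating points reported above.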