Large Language Models (LLMs) are increasingly used in a variety of applications, but concerns around membership inference have grown in parallel. Previous efforts focus on black- to grey-box access, thus neglecting the potential benefit of internal LLM information. To address this, we propose the use of Linear Probes (LPs) as a method to detect Membership Inference Attacks (MIAs) by examining the internal activations of LLMs. Our approach, dubbed LUMIA, applies LPs layer by layer to obtain fine-grained data on the model's inner workings. We test this method across several model architectures, sizes, and datasets, including unimodal and multimodal tasks. In unimodal MIA, LUMIA achieves an average gain of 15.71% in Area Under the Curve (AUC) over previous techniques. Remarkably, LUMIA reaches AUC > 60% in 65.33% of cases -- an increase of 46.80% over the state of the art. Furthermore, our approach reveals key insights, such as the model layers where MIAs are most detectable. In multimodal models, LPs indicate that visual inputs can contribute significantly to detecting MIAs -- AUC > 60% is reached in 85.90% of experiments.
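The layer-by-layer probing idea can be illustrated with a minimal sketch. The code below is not the LUMIA implementation; it uses synthetic stand-in activations (in practice these would be the LLM's per-layer hidden states for member and non-member samples) and fits one logistic-regression linear probe per layer, reporting the held-out AUC at each depth. The dimensions, the member/non-member shift, and the helper `layer_activations` are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-layer hidden activations; in a real setup these
# would be extracted from the LLM (shape: n_samples x hidden_dim per layer).
n_samples, hidden_dim, n_layers = 400, 64, 6
labels = rng.integers(0, 2, n_samples)  # 1 = training-set member, 0 = non-member

def layer_activations(layer: int) -> np.ndarray:
    """Hypothetical activations: members get a small layer-dependent shift,
    so deeper layers are (artificially) more separable."""
    base = rng.normal(size=(n_samples, hidden_dim))
    return base + 0.05 * layer * labels[:, None]

# Fit one linear probe per layer and record its held-out AUC.
aucs = []
for layer in range(n_layers):
    X = layer_activations(layer)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1]))

best_layer = int(np.argmax(aucs))
print("per-layer AUC:", [round(a, 3) for a in aucs])
print("most detectable layer:", best_layer)
```

The per-layer AUC curve is the kind of signal used to identify at which depth membership is most detectable: layers where the probe exceeds chance (AUC > 0.5, and in the paper's reporting, AUC > 60%) indicate that membership information is linearly decodable from the activations at that depth.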