Large language models (LLMs) have recently gained widespread attention, but their potential security vulnerabilities, especially privacy leakage, are also becoming apparent. To test and evaluate data extraction risks in LLMs, we propose CoSPED, short for Consistent Soft Prompt targeted data Extraction and Defense. We introduce several novel components, including Dynamic Loss, Additive Loss, Common Loss, and a Self-Consistency Decoding Strategy, each evaluated for its ability to enhance the consistency of the soft prompt tuning process. Through extensive experiments with various combinations of these components, we achieve an extraction rate of 65.2% under a 50-token prefix comparison. Comparisons of CoSPED with prior work confirm its superior extraction rates. We further evaluate CoSPED in additional scenarios, achieving an extraction rate of 51.7% on the Pythia model and introducing a cross-model comparison. Finally, we explore defense via Rank-One Model Editing and reduce the extraction rate to 1.6%, demonstrating that our analysis of extraction mechanisms can directly inform effective mitigation strategies against soft-prompt-based attacks.