The widespread practice of indiscriminately scraping data to fine-tune language models (LMs) raises significant legal and ethical concerns, particularly regarding compliance with data protection laws such as the General Data Protection Regulation (GDPR). This practice often results in the unauthorized use of personal information and has prompted growing debate within the academic and regulatory communities. Recent work has introduced the concept of unlearnable datasets: imperceptible noise is added to clean data so that a model trained on it achieves low training loss yet fails to generalize to unseen test data. Though somewhat effective, these approaches are designed predominantly for images and suffer from practical constraints such as requiring knowledge of the target model. To address these limitations, we introduce RegText, a framework that injects imperceptible spurious correlations into natural language datasets, rendering them unlearnable without altering their semantic content. We demonstrate RegText's utility through rigorous empirical analysis of small and large LMs. Notably, RegText prevents newer models such as GPT-4o and Llama from learning from our generated data, reducing their test accuracy below their zero-shot performance and paving the way toward generating unlearnable text that protects public data.
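The abstract describes RegText only at a high level, so the toy sketch below is not the authors' algorithm; it merely illustrates the general idea of a label-correlated spurious signal in text. The helper `inject_spurious_correlation` and the class-to-token map `SPURIOUS_TOKENS` are hypothetical names invented for this example, and the crude word-level insertion shown here is far more visible than the imperceptible correlations the abstract claims.

```python
# Illustrative sketch only: inject a label-correlated "trigger" word into each
# example of a toy text-classification dataset. A model trained on this data
# can fit the trigger instead of the real content, harming test generalization.
# This is NOT the RegText method; all names and choices here are assumptions.

import random

# Hypothetical mapping from class label to an innocuous-looking trigger word.
SPURIOUS_TOKENS = {0: "indeed", 1: "notably"}

def inject_spurious_correlation(text: str, label: int, rng: random.Random) -> str:
    """Insert the label's trigger word at a random word boundary in the text."""
    words = text.split()
    pos = rng.randrange(len(words) + 1)
    words.insert(pos, SPURIOUS_TOKENS[label])
    return " ".join(words)

if __name__ == "__main__":
    rng = random.Random(0)
    toy_dataset = [
        ("the film was a complete waste of time", 0),
        ("a moving and beautifully acted drama", 1),
    ]
    for text, label in toy_dataset:
        print(label, "->", inject_spurious_correlation(text, label, rng))
```

Under this (assumed) construction, the trigger word perfectly predicts the label on the poisoned training set but carries no signal on clean test data, which is one simple way a model could reach low training loss while its test accuracy degrades.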