The impressive performance of Large Language Models (LLMs) and their immense potential for commercialization have given rise to serious concerns over the Intellectual Property (IP) of their training data. In particular, the synthetic texts generated by LLMs may infringe the IP of the data used to train them. It is therefore imperative to be able to perform source attribution, i.e., to identify the data provider who contributed to the generation of a synthetic text by an LLM. In this paper, we show that this problem can be tackled by watermarking, i.e., by enabling an LLM to generate synthetic texts with embedded watermarks that contain information about their source(s). We identify the key properties of such watermarking frameworks (e.g., source attribution accuracy, robustness against adversaries) and propose a source attribution framework whose algorithmic design satisfies these key properties. Our framework enables an LLM to learn an accurate mapping from generated texts to data providers, which sets the foundation for effective source attribution. Extensive empirical evaluations show that our framework achieves effective source attribution.