Dominant pre-trained language models (PLMs) have been successful in high-quality natural language generation. However, the analysis of their generation is not yet mature: do they acquire generalizable linguistic abstractions, or do they simply memorize and recover substrings of the training data? In particular, few studies have focused on domain-specific PLMs. In this study, we pre-trained domain-specific GPT-2 models on a limited corpus of Japanese newspaper articles and quantified their memorization of training data by comparing them with general Japanese GPT-2 models. Our experiments revealed that domain-specific PLMs sometimes "copy and paste" on a large scale. Furthermore, we replicated in Japanese the empirical finding from previous English studies that memorization is related to duplication, model size, and prompt length. Our evaluations are free from data-contamination concerns because we focus on newspaper articles behind paywalls, which prevents their use as training data. We hope that our paper encourages sound discussion of issues such as the security and copyright of PLMs.