How novel are texts generated by language models (LMs) relative to their training corpora? In this work, we investigate the extent to which modern LMs generate $n$-grams from their training data, evaluating both (i) the probability LMs assign to complete training $n$-grams and (ii) $n$-novelty, the proportion of $n$-grams generated by an LM that did not appear in the training data (for arbitrarily large $n$). To enable arbitrary-length $n$-gram search over a corpus in constant time, we develop Rusty-DAWG, a novel search tool inspired by indexing of genomic data. We compare the novelty of LM-generated text to human-written text and explore factors that affect generation novelty, focusing on the Pythia models. We find that, for $n > 4$, LM-generated text is less novel than human-written text, though it is more novel for smaller $n$. Larger LMs and more constrained decoding strategies both decrease novelty. Finally, we show that LMs complete $n$-grams with lower loss if they are more frequent in the training data. Overall, our results reveal factors influencing the novelty of LM-generated text, and we release Rusty-DAWG to facilitate further pretraining data research.
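The $n$-novelty metric defined above can be sketched as follows. This is a minimal illustration using a naive set lookup over token sequences, not the constant-time Rusty-DAWG index the paper builds; the function and variable names are illustrative, not from the paper's code.

```python
def n_novelty(generated, corpus, n):
    """Fraction of n-grams in `generated` that never appear in `corpus`.

    `generated` and `corpus` are token lists; an n-gram is a length-n
    contiguous slice. A naive set membership test stands in for the
    suffix-automaton (DAWG) lookup used in the actual work.
    """
    # All n-grams present in the training corpus.
    seen = {tuple(corpus[i:i + n]) for i in range(len(corpus) - n + 1)}
    # All n-grams produced by the model (with multiplicity).
    gen = [tuple(generated[i:i + n]) for i in range(len(generated) - n + 1)]
    if not gen:
        return 0.0
    return sum(g not in seen for g in gen) / len(gen)
```

For example, with `corpus = list("abcab")` and `generated = list("abcd")`, the corpus bigrams are `{ab, bc, ca}` and the generated bigrams are `ab, bc, cd`, so the 2-novelty is 1/3: only `cd` is unseen.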