How novel are texts generated by language models (LMs) relative to their training corpora? In this work, we investigate the extent to which modern LMs generate $n$-grams from their training data, evaluating both (i) the probability LMs assign to complete training $n$-grams and (ii) $n$-novelty, the proportion of $n$-grams generated by an LM that did not appear in the training data (for arbitrarily large $n$). To enable arbitrary-length $n$-gram search over a corpus in constant time w.r.t. corpus size, we develop Rusty-DAWG, a novel search tool inspired by indexing of genomic data. We compare the novelty of LM-generated text to human-written text and explore factors that affect generation novelty, focusing on the Pythia models. We find that, for $n > 4$, LM-generated text is less novel than human-written text, though it is more novel for smaller $n$. Larger LMs and more constrained decoding strategies both decrease novelty. Finally, we show that LMs complete $n$-grams with lower loss if they are more frequent in the training data. Overall, our results reveal factors influencing the novelty of LM-generated text, and we release Rusty-DAWG to facilitate further pretraining data research.
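The $n$-novelty metric described above can be illustrated with a minimal sketch. This is an assumption-laden toy implementation, not the paper's code: it stands in for Rusty-DAWG with a plain Python set of training $n$-grams, and the function names (`ngrams`, `n_novelty`) are hypothetical.

```python
def ngrams(tokens, n):
    """Return all n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def n_novelty(generated_tokens, training_ngram_set, n):
    """Fraction of generated n-grams that do not appear in the training data.

    `training_ngram_set` is a stand-in membership oracle; the paper's
    Rusty-DAWG answers the same queries in constant time w.r.t. corpus size.
    """
    grams = ngrams(generated_tokens, n)
    if not grams:
        return 0.0
    novel = sum(1 for g in grams if g not in training_ngram_set)
    return novel / len(grams)
```

For example, with training tokens `a b c d` and generated tokens `a b c e`, the generated bigrams are `(a,b)`, `(b,c)`, `(c,e)`; only `(c,e)` is unseen, so the 2-novelty is 1/3.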