Transformer language models typically operate with a fixed-length context window, which has grown in step with the scale of large pretraining datasets. In the BabyLM Challenge, however, many past submissions have defaulted to much shorter sequence lengths. We examine the impact of sequence length on BabyLM pretraining to answer a simple question: what sequence length should we use when training BabyLMs? Using 100M words of training data and fixed compute budgets, we compare 125M-parameter Mamba and OPT models and find that, although longer is often better, the optimal length depends on both task and architecture. Shorter sequences suffice for grammatical generalization tasks, whereas longer contexts benefit morphological analogical reasoning tasks.
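To make the comparison concrete, one common way to equalize the training budget across sequence lengths is to hold the total token budget and the tokens per optimizer step fixed while scaling the batch size inversely with the sequence length. The sketch below illustrates this bookkeeping only; it is an assumption about how a fixed-compute setup might be arranged, not a description of this paper's exact protocol, and all names (`TOTAL_TOKENS`, `TOKENS_PER_STEP`, `schedule`) are illustrative.

```python
# Minimal sketch: equalizing a token/compute budget across sequence lengths.
# Assumption: "fixed compute" is approximated by a fixed total-token budget and
# a fixed number of tokens per optimizer step; all values are illustrative.

TOTAL_TOKENS = 100_000_000    # roughly a 100M-word training budget (illustrative)
TOKENS_PER_STEP = 131_072     # tokens seen per optimizer step (illustrative)

def schedule(seq_len: int) -> dict:
    """Return a batch size and step count that keep the token budget constant."""
    batch_size = TOKENS_PER_STEP // seq_len          # shorter sequences -> larger batches
    steps = TOTAL_TOKENS // (batch_size * seq_len)   # total optimizer steps
    return {"seq_len": seq_len, "batch_size": batch_size, "steps": steps}

if __name__ == "__main__":
    for seq_len in (64, 128, 256, 512, 1024, 2048):
        print(schedule(seq_len))
```

Under this kind of schedule, every sequence-length setting consumes (approximately) the same number of training tokens, so differences in downstream performance can be attributed to context length rather than to the amount of data or compute seen.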