Ensuring safety, factuality, and overall quality in the generations of large language models is a critical challenge, especially as these models are increasingly deployed in real-world applications. The prevailing approach to addressing these issues involves collecting expensive, carefully curated datasets and applying multiple stages of fine-tuning and alignment. However, even this complex pipeline cannot guarantee the correction of patterns learned during pretraining. Addressing these issues during pretraining is therefore crucial, as pretraining shapes a model's core behaviors and determines whether unsafe or hallucinated outputs become deeply embedded. To tackle this issue, we introduce a new pretraining method that streams documents and uses reinforcement learning (RL) to improve the next K generated tokens at each step. A strong, post-trained model judges candidate generations (model rollouts, the original suffix, and a rewritten suffix) for quality, safety, and factuality. Early in training, the process relies on the original and rewritten suffixes; as the model improves, RL rewards high-quality rollouts. This approach builds higher-quality, safer, and more factual models from the ground up. In experiments, our method yields relative improvements of 36.2% in factuality and 18.5% in safety over standard pretraining, and up to an 86.3% win-rate improvement in overall generation quality.
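The per-step candidate selection described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the `judge` heuristic, the candidate names, and the `warmup` schedule are all hypothetical stand-ins for the post-trained judge model and the real reward schedule.

```python
def judge(prefix, candidate):
    """Toy stand-in for the post-trained judge model (hypothetical).

    Returns a scalar in [0, 1] meant to combine quality, safety, and
    factuality; here it just penalizes an "unsafe" marker and deviation
    from a crude target length.
    """
    score = 1.0
    if "unsafe" in candidate:
        score -= 0.5  # safety penalty
    n_words = len(candidate.split())
    score -= 0.1 * abs(n_words - 8) / 8  # crude quality/length prior
    return max(0.0, min(1.0, score))


def training_step(prefix, original_suffix, rewritten_suffix, rollout,
                  step, warmup=1000):
    """One pretraining step over a streamed document chunk.

    Scores the three candidate continuations with the judge. Early in
    training (step < warmup) the target comes from the reference
    suffixes; later, the model's own rollout competes and earns an RL
    reward based on its judge score. Returns (chosen target, reward).
    """
    candidates = {
        "original": original_suffix,
        "rewritten": rewritten_suffix,
        "rollout": rollout,
    }
    scores = {name: judge(prefix, text) for name, text in candidates.items()}

    if step < warmup:
        # Rely on the better of the two reference suffixes early on.
        target = max(("original", "rewritten"), key=scores.get)
        reward = scores[target]
    else:
        # Let the rollout compete; reward reflects its own judged quality.
        target = max(scores, key=scores.get)
        reward = scores["rollout"]
    return target, reward
```

In this sketch the switch from suffix supervision to rewarding rollouts is a hard warmup cutoff; in practice one would expect a smoother schedule tied to the model's judged rollout quality.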