We show that continual pretraining on plausible misinformation can overwrite specific factual knowledge in large language models without degrading overall performance. Unlike prior poisoning work under static pretraining, we study repeated exposure to counterfactual claims during continual updates. Using paired fact-counterfact items with graded poisoning ratios, we track how internal preferences between competing facts evolve across checkpoints, layers, and model scales. Even moderate poisoning (50-100%) flips over 55% of responses from correct to counterfactual while leaving the rate of ambiguous responses nearly unchanged. These belief flips emerge abruptly, concentrate in late layers (e.g., Layers 29-36 in 3B models), and are partially reversible via patching (up to 56.8%). The corrupted beliefs generalize beyond poisoned prompts, selectively degrading commonsense reasoning while leaving alignment benchmarks largely intact and transferring imperfectly across languages. These results expose a failure mode of continual pretraining in which targeted misinformation replaces internal factual representations without triggering broad performance collapse, motivating representation-level monitoring of factual integrity during model updates.
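To make the tracking of "internal preferences between competing facts" concrete, the sketch below (not the authors' released code) shows one plausible way to score a checkpoint on a paired fact-counterfact item: compare the model's log-likelihood of the factual versus the counterfactual completion and treat a negative margin as a belief flip. The model name, prompt format, and the `pairs` data are illustrative assumptions, not the paper's actual dataset or checkpoints.

```python
# A minimal sketch, assuming a HuggingFace-style causal LM checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/continually-pretrained-3b"  # hypothetical checkpoint name
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def completion_logprob(prompt: str, completion: str) -> float:
    """Sum of token log-probabilities of `completion` conditioned on `prompt`.

    Assumes tokenizing prompt+completion yields the prompt tokens as a prefix,
    which holds for typical BPE tokenizers on simple prompts like these.
    """
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Shift by one: logits at position i predict the token at position i+1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    start = prompt_ids.shape[1] - 1  # first completion token in shifted frame
    return log_probs[start:].gather(1, targets[start:, None]).sum().item()

# Illustrative paired item (prompt, factual completion, counterfactual completion).
pairs = [("The capital of France is", " Paris", " Lyon")]
for prompt, fact, counterfact in pairs:
    margin = completion_logprob(prompt, fact) - completion_logprob(prompt, counterfact)
    # margin > 0: checkpoint still prefers the fact; margin < 0: belief flipped.
    print(f"{prompt!r}: fact-vs-counterfact log-prob margin = {margin:+.3f}")
```

Running this per checkpoint of the continual-pretraining run would trace when the margin crosses zero, which is one way the abrupt flips described above could be detected.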