Large language models (LLMs) have a surprising failure: when trained on "A has a feature B", they do not generalize to "B is a feature of A", which is termed the Reversal Curse. Even when training on trillions of tokens this issue still appears due to Zipf's law, so it persists even if we train on the entire internet. This work proposes an alternative training scheme, called reverse training, whereby all words are used twice, doubling the amount of available tokens. The LLM is trained in both forward and reverse directions by reversing the training strings while preserving (i.e., not reversing) chosen substrings, such as entities. We show that data-matched reverse-trained models provide superior performance to standard models on standard tasks, and compute-matched reverse-trained models provide far superior performance on reversal tasks, helping to resolve the reversal curse issue.
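To illustrate the entity-preserving reversal described above, the following Python sketch reverses word order while keeping supplied entity strings intact. The function name, the word-level granularity, and the assumption that entities are given as an explicit list are illustrative choices, not the paper's implementation.

```python
import re

def reverse_with_entities(text, entities):
    """Reverse the word order of `text` while keeping each entity span intact.

    A minimal sketch of entity-preserving reversal: `entities` is an assumed
    list of entity strings to protect; the paper's actual segmentation
    (word-, token-, or entity-level) may differ.
    """
    if not entities:
        units = text.split()
    else:
        # Match any protected entity (longest first) or, failing that, a single word.
        entity_pattern = "|".join(re.escape(e) for e in sorted(entities, key=len, reverse=True))
        pattern = f"({entity_pattern})|\\S+"
        units = [m.group(0) for m in re.finditer(pattern, text)]
    # Reverse the sequence of units; words inside a protected entity keep their order.
    return " ".join(reversed(units))

# Example: "Abraham Lincoln" and "United States" stay in forward order.
print(reverse_with_entities(
    "Abraham Lincoln was the 16th president of the United States",
    entities=["Abraham Lincoln", "United States"],
))
# -> "United States the of president 16th the was Abraham Lincoln"
```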