This short note is written for the rapid communication of a long-context training approach and to share the idea of how to train it with low memory usage. In the note, we generalize the attention algorithm and neural network of the Generative Pre-trained Transformer and reinterpret them in the path-integral formalism. First, the role of the transformer is understood as the time evolution of the token state; second, it is suggested that all key-token states at the same time as the query token can attend with the query-token states. As a result of the repeated time evolution, it is argued that the token states in the past sequence meet the token states in the present sequence, so that attention between separated sequences becomes possible, maintaining unbounded contextual information using only the low memory required for a limited-size sequence. For the experiment, an input token window size of $12$ was taken, and one GPU with $24$GB of memory was used for the pre-training. It was confirmed that a context of more than $150$ tokens is preserved. The sampling result of the training, the code, and other details will be included in a later revised version of this note.
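The note does not yet give implementation details, but the mechanism described above, a fixed-size window whose token states are carried forward so that past and present sequences can attend to each other, can be sketched roughly as follows. This is a minimal illustration under assumptions of our own: the function `window_attention`, the weight names `Wq`, `Wk`, `Wv`, and the choice to reuse the window's outputs as the carried-over cache are all hypothetical, not taken from the note.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(x, cache, Wq, Wk, Wv):
    """One 'time-evolution' step over a fixed-size token window.

    x:     (T, d) token states of the current window (T = 12 in the note's setup)
    cache: (T, d) token states carried over from the previous window,
           or None for the first window.

    Queries come only from the current window, while keys and values also
    cover the cached past states, so attention crosses the window boundary
    even though memory stays bounded by the window size.
    """
    kv_in = x if cache is None else np.concatenate([cache, x], axis=0)
    q = x @ Wq
    k = kv_in @ Wk
    v = kv_in @ Wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    out = scores @ v
    # the evolved window becomes the cache for the next step,
    # so context propagates indefinitely at constant memory cost
    return out, out.copy()
```

Iterating this step over consecutive windows of a long document would let information from far-past windows reach the present one through the repeatedly carried cache, which is one plausible reading of how attention between separated sequences is maintained with low memory.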