We introduce the Byte Latent Transformer (BLT), a new byte-level LLM architecture that, for the first time, matches tokenization-based LLM performance at scale with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of computation. Patches are segmented based on the entropy of the next byte, allocating more compute and model capacity where increased data complexity demands it. We present the first FLOP-controlled scaling study of byte-level models up to 8B parameters and 4T training bytes. Our results demonstrate the feasibility of scaling models trained on raw bytes without a fixed vocabulary. Both training and inference efficiency improve by dynamically selecting long patches when data is predictable, along with qualitative improvements in reasoning and long-tail generalization. Overall, for fixed inference costs, BLT shows significantly better scaling than tokenization-based models by simultaneously growing both patch and model size.
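The entropy-based patching described above can be illustrated with a minimal sketch. This is not the paper's implementation: in BLT the per-byte entropies come from a small byte-level language model, whereas here `entropy` and the threshold value are illustrative stand-ins. The idea is simply that a byte whose next-byte distribution has high entropy (hard to predict) starts a new patch, so predictable stretches collapse into long, cheap patches.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a next-byte probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def segment_patches(entropies, threshold=2.0):
    """Segment a byte sequence into patches by next-byte entropy.

    entropies[i] is a model's entropy when predicting byte i; when it
    exceeds the (illustrative) threshold, byte i begins a new patch.
    Returns a list of (start, end) half-open index pairs covering all bytes.
    """
    boundaries = [0]
    for i, h in enumerate(entropies):
        if i > 0 and h > threshold:
            boundaries.append(i)
    boundaries.append(len(entropies))
    return list(zip(boundaries[:-1], boundaries[1:]))

# Low-entropy runs merge into one patch; entropy spikes open new patches.
patches = segment_patches([0.1, 0.2, 3.0, 0.1, 2.5, 0.3])
```

Under this scheme, predictable data (low entropy) yields few, long patches, so the large latent transformer runs fewer steps; complex data yields short patches and receives more compute, matching the abstract's claim of allocating capacity where complexity demands it.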