This work presents a Fully BInarized Large Language Model (FBI-LLM), demonstrating for the first time how to train a large-scale binary language model from scratch (rather than a partially binarized or ternary LLM such as BitNet b1.58) that matches the performance of its full-precision counterparts (e.g., FP16 or BF16) among transformer-based LLMs. It achieves this by employing an autoregressive distillation (AD) loss while maintaining model dimensions (130M, 1.3B, 7B) and training data volume equivalent to regular LLM pretraining, delivering competitive results in both perplexity and task-specific effectiveness. Intriguingly, by analyzing the training trajectory, we find that pretrained weights are not necessary for training binarized LLMs from scratch. This research encourages a new computational framework and may facilitate the future design of specialized hardware tailored to fully 1-bit LLMs. We make all models, code, and training data fully accessible and transparent to support further research (Code: https://github.com/LiqunMa/FBI-LLM. Model: https://huggingface.co/LiqunMa/).
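To make the two core ideas concrete, the sketch below illustrates (a) a token-level autoregressive distillation loss, in the common form of a cross-entropy between a full-precision teacher's predictive distribution and the binarized student's, and (b) 1-bit weight binarization via a sign function with a per-column scaling factor. This is a minimal NumPy illustration under those assumptions; the exact loss formulation and binarization scheme used in FBI-LLM may differ in detail.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def autoregressive_distillation_loss(student_logits, teacher_logits):
    """Cross-entropy between the teacher's next-token distribution and the
    student's, averaged over all sequence positions. Both inputs have shape
    (seq_len, vocab_size). A common distillation objective; assumed form."""
    p_teacher = softmax(teacher_logits)
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

def binarize(W):
    """Binarize a weight matrix to {-alpha, +alpha}, where alpha is the
    per-column mean absolute value (a standard 1-bit scaling choice)."""
    alpha = np.abs(W).mean(axis=0, keepdims=True)
    return np.sign(W) * alpha
```

Because cross-entropy H(p, q) = H(p) + KL(p || q), the loss is minimized exactly when the student's distribution matches the teacher's, which is why distillation against the teacher's full distribution provides a denser training signal than one-hot next-token labels.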