Despite the growing prevalence of large language model (LLM) architectures, a crucial concern persists regarding their energy and power consumption, whose efficiency still lags far behind the remarkable energy efficiency of the human brain. Recent strides in spiking language models (LMs) and spiking transformer architectures aim to address this concern by emulating the sparse spiking activity of biological neurons to enhance energy and power efficiency. Building on the principles of model quantization and energy efficiency, this paper proposes a novel binary/ternary (1/1.58-bit) spiking LM architecture. Scalability comparable to a deep spiking LM architecture is achieved through an efficient knowledge distillation technique, wherein knowledge from a non-spiking, full-precision "teacher" model is transferred to an extremely weight-quantized spiking "student" LM. To the best of our knowledge, the proposed model is the first 1/1.58-bit spiking LM, and its performance is rigorously evaluated on multiple text classification tasks of the GLUE benchmark.
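To make the two core ingredients concrete, the sketch below illustrates one plausible realization: a BitNet-b1.58-style absmean ternary quantizer with a straight-through estimator, paired with a standard soft-label distillation loss. This is a minimal illustration under stated assumptions; the paper's actual quantizer, spiking neuron dynamics, and distillation objective may differ, and the function names `ternary_quantize` and `distillation_loss` are illustrative only.

```python
import torch
import torch.nn.functional as F

def ternary_quantize(w: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Per-tensor absmean scale, then round weights to {-1, 0, +1}
    # (BitNet-b1.58-style; assumed here, not confirmed by the paper).
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp_(-1, 1)
    # Straight-through estimator: the forward pass uses the quantized
    # weights, while gradients flow through as if no rounding occurred.
    return w + (w_q * scale - w).detach()

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    # Soft-target KL divergence between the full-precision teacher and
    # the weight-quantized spiking student, with temperature T.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
```

In this kind of setup, the quantizer is typically applied to the student's linear-layer weights on every forward pass during distillation, while the full-precision "shadow" weights are the ones updated by the optimizer.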