We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It comprises 236B total parameters, of which 21B are activated for each token, and supports a context length of 128K tokens. DeepSeek-V2 adopts innovative architectures including Multi-head Latent Attention (MLA) and DeepSeekMoE. MLA guarantees efficient inference by significantly compressing the Key-Value (KV) cache into a latent vector, while DeepSeekMoE enables training strong models at an economical cost through sparse computation. Compared with DeepSeek 67B, DeepSeek-V2 achieves significantly stronger performance while saving 42.5% of training costs, reducing the KV cache by 93.3%, and boosting the maximum generation throughput to 5.76 times that of DeepSeek 67B. We pretrain DeepSeek-V2 on a high-quality, multi-source corpus consisting of 8.1T tokens, and further perform Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) to fully unlock its potential. Evaluation results show that, even with only 21B activated parameters, DeepSeek-V2 and its chat versions still achieve top-tier performance among open-source models.
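To make the KV-cache claim concrete, the sketch below illustrates the core idea behind MLA's compression: instead of caching full per-head keys and values, only a low-dimensional latent vector is cached per token, and keys/values are reconstructed from it via up-projections at attention time. This is a minimal, hypothetical PyTorch sketch, not DeepSeek-V2's actual implementation; all dimensions (d_model, n_heads, d_latent) are illustrative, and details such as the decoupled RoPE keys and causal masking are omitted.

```python
import torch
import torch.nn as nn

class SimplifiedMLA(nn.Module):
    """Minimal sketch of Multi-head Latent Attention's KV compression.

    Only the latent c_kv is cached per token (d_latent floats) instead of
    full keys and values (2 * n_heads * d_head floats), which is where the
    KV-cache savings come from. Dimensions are illustrative assumptions.
    """

    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # Down-projection: its output is the only per-token state cached.
        self.w_dkv = nn.Linear(d_model, d_latent, bias=False)
        # Up-projections reconstruct per-head keys/values from the latent.
        self.w_uk = nn.Linear(d_latent, d_model, bias=False)
        self.w_uv = nn.Linear(d_latent, d_model, bias=False)
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_o = nn.Linear(d_model, d_model, bias=False)

    def forward(self, h, kv_cache=None):
        # h: (batch, seq, d_model); kv_cache: (batch, past_seq, d_latent)
        b, t, _ = h.shape
        c_kv = self.w_dkv(h)                      # (b, t, d_latent)
        if kv_cache is not None:
            # Decoding step: append only the compressed latent state.
            c_kv = torch.cat([kv_cache, c_kv], dim=1)
        k = self.w_uk(c_kv).view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.w_uv(c_kv).view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        q = self.w_q(h).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        # Causal masking omitted for brevity (fine for single-token decoding).
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.w_o(out), c_kv                # cache grows by d_latent per token
```

Under these illustrative numbers, the cache stores 128 floats per token rather than 2 × 8 × 128 = 2048 for standard multi-head attention; the abstract's reported 93.3% KV-cache reduction reflects the same mechanism at DeepSeek-V2's actual scale.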