Large Language Models (LLMs) have shown exceptional abilities, yet training them remains challenging: performance depends strongly on data quality and on finding the best instruction-tuning set. Further, the inherent limitations of current training methods make it particularly difficult to train relatively small models with 7B and 13B parameters. In this work, we propose an improved training method for such models that utilises knowledge from larger models, such as mixture-of-experts (8x7B) architectures. The scale of these larger models allows them to capture a wide range of variation from data alone, making them effective teachers for smaller models. Moreover, we introduce a novel post-training domain alignment phase that employs domain-specific expert models to strengthen domain-specific knowledge during training while preserving the model's ability to generalise. Fine-tuning Mistral 7B and 2x7B with our method surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters, achieving up to $7.9$ on MT-Bench and $93.04\%$ on AlpacaEval.
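The abstract does not specify the exact training objective used to transfer knowledge from the larger teacher to the smaller student. Purely as a minimal sketch, and assuming a standard soft-target knowledge-distillation loss (the `temperature` and `alpha` hyperparameters below are illustrative, not taken from the paper), the teacher-guided objective could look like the following:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend next-token cross-entropy on hard labels with a KL term
    that pulls the student's token distribution towards the teacher's.
    Hypothetical hyperparameters; not the paper's exact objective."""
    # Soft targets: teacher and student distributions at temperature T
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures
    kd_loss = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard next-token cross-entropy against the ground-truth labels
    ce_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return alpha * kd_loss + (1 - alpha) * ce_loss
```

Under this reading, the large mixture-of-experts model supplies `teacher_logits`, while the 7B or 2x7B student is optimised on the combined loss; the post-training domain alignment phase would then apply a similar teacher-guided objective with domain-specific expert models as teachers.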