The alignment of large language models (LLMs) is critical for developing models that are both effective and safe. Traditional approaches align models during the instruction tuning or reinforcement learning stages, which we refer to in this paper as `post alignment'. We argue that alignment during the pre-training phase, which we term `native alignment', warrants investigation. Native alignment aims to prevent unaligned content from entering the model at the outset, rather than relying on post-hoc processing. It does so by leveraging extensively aligned pre-training data, which enhances the effectiveness and usability of the resulting pre-trained models. Our study specifically explores the application of native alignment to Arabic LLMs. We conduct comprehensive experiments and ablation studies to evaluate the impact of native alignment on model performance and alignment stability. In addition, we release open-source Arabic LLMs that achieve state-of-the-art performance on various benchmarks, providing significant benefits to the Arabic LLM community.
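To make the distinction concrete, the sketch below contrasts the point at which alignment is applied in the two paradigms: native alignment processes the raw corpus before pre-training, whereas post alignment adjusts behaviour only after the base model is trained. This is a minimal, illustrative sketch only; the function names (align_document, native_alignment, post_alignment) and the simple line-filtering rule are hypothetical placeholders, not the data-alignment pipeline described in this paper.

```python
# Conceptual sketch: where alignment happens in 'native' vs. 'post' alignment.
# Illustrative only; the names and the rule-based "aligner" below are
# hypothetical placeholders, not the paper's actual pipeline.

from typing import Iterable, List


def align_document(doc: str) -> str:
    """Hypothetical data-level aligner: rewrite or filter a raw pre-training
    document so unaligned content never enters the corpus. A real system
    would use a strong LLM or classifier; this stub just drops flagged lines."""
    flagged = ("unsafe:", "toxic:")  # placeholder markers for unaligned content
    kept = [line for line in doc.splitlines()
            if not line.lower().startswith(flagged)]
    return "\n".join(kept)


def native_alignment(corpus: Iterable[str]) -> List[str]:
    """Native alignment: apply the aligner to every document *before*
    pre-training, so the base model only ever sees aligned data."""
    return [align_document(doc) for doc in corpus]


def post_alignment(model_output: str) -> str:
    """Post alignment (for contrast): leave pre-training data untouched and
    correct behaviour afterwards, e.g. via instruction tuning or RLHF.
    Shown here as a no-op placeholder."""
    return model_output


if __name__ == "__main__":
    raw_corpus = [
        "A neutral encyclopedia passage.\nunsafe: an undesirable line.",
        "Another ordinary document.",
    ]
    for doc in native_alignment(raw_corpus):
        print(doc)
        print("---")
```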