We present the e-Llama models: 8 billion and 70 billion parameter large language models adapted to the e-commerce domain. These models are intended as foundation models with deep knowledge of e-commerce that serve as a base for instruction tuning and fine-tuning. The e-Llama models are obtained by continually pretraining the Llama 3.1 base models on 1 trillion tokens of domain-specific data. We describe our approach and motivate our choice of hyperparameters with a series of ablation studies. To quantify how well the models have been adapted to the e-commerce domain, we define and implement a set of multilingual, e-commerce-specific evaluation tasks. We show that, with a carefully chosen training setup, the Llama 3.1 models can be adapted to the new domain without sacrificing significant performance on general-domain tasks. We also explore merging the adapted model and the base model for finer control of the performance trade-off between the two domains.
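The model merging mentioned above is commonly realized as linear interpolation of the two checkpoints' weights. The following is a minimal sketch of that standard form, assuming the merge in this paper follows it; the adapted-model path and the mixing weight `alpha` are illustrative placeholders, not values from the paper.

```python
# Sketch: linear interpolation between a base and a domain-adapted checkpoint.
# Assumes both models share the same architecture and parameter names.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
adapted = AutoModelForCausalLM.from_pretrained("path/to/e-llama-8b")  # hypothetical path

alpha = 0.5  # 0.0 -> pure base model, 1.0 -> pure adapted model

merged_state = {}
with torch.no_grad():
    base_state = base.state_dict()
    adapted_state = adapted.state_dict()
    for name, base_param in base_state.items():
        # Interpolate each parameter tensor between the two checkpoints.
        merged_state[name] = (1.0 - alpha) * base_param + alpha * adapted_state[name]

base.load_state_dict(merged_state)
base.save_pretrained("merged-e-llama-8b")
```

Sweeping `alpha` between 0 and 1 yields a family of models trading general-domain performance against e-commerce performance, which is the control knob the abstract refers to.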