Despite outstanding performance on many tasks, Large Language Models (LLMs) still lack accuracy when dealing with highly technical domains. In particular, telecommunications (telco) is a challenging domain due to the large number of lexical, semantic, and conceptual peculiarities. Yet, this domain holds many valuable use cases, directly linked to industrial needs. Hence, this paper studies how LLMs can be adapted to the telco domain. It reports our effort to (i) collect a massive corpus of domain-specific data (800M tokens, 80K instructions), (ii) perform adaptation using various methodologies, and (iii) benchmark them against larger generalist models in downstream tasks that require extensive knowledge of telecommunications. Our experiments on Llama-2-7b show that domain-adapted models can challenge the large generalist models. They also suggest that adaptation can be restricted to a single instruction-tuning step, discarding the need for any fine-tuning on raw texts beforehand.