Fine-tuning large language models (LLMs) on telecom datasets is a common practice for adapting general-purpose models to the telecom domain. However, little attention has been paid to how this process may compromise model safety. Recent research has shown that even benign fine-tuning can degrade the safety alignment of LLMs, causing them to respond to harmful or unethical user queries. In this paper, we investigate this issue by fine-tuning LLMs on three representative telecom datasets and show that safety degrades even under light telecom domain adaptation. To this end, we introduce TeleHarm, the first telecom-specific red-teaming benchmark, which we use alongside the established DirectHarm and HexPhi datasets to systematically assess harmful behavior. We further extend our analysis to publicly available TeleLLMs that were continually pre-trained on large telecom corpora, revealing that safety alignment is severely lacking, primarily due to the omission of safety-focused instruction tuning. To address these issues, we evaluate three realignment defenses: SafeInstruct, SafeLoRA, and SafeMERGE. We show that, across all settings, these defenses can effectively restore safety without compromising telecom task performance, yielding Safe teleCOMMunication (SafeCOMM) models. Our work serves as both a diagnostic study and a practical guide for safety realignment in telecom-tuned LLMs, underscoring the need for safety-aware instruction and fine-tuning in the telecom domain.