Multi-domain machine translation (MDMT) aims to build a unified model capable of translating content across diverse domains. Despite the impressive machine translation capabilities demonstrated by large language models (LLMs), domain adaptation remains a challenge. Existing MDMT methods such as in-context learning and parameter-efficient fine-tuning (PEFT) often suffer from domain shift, parameter interference, and limited generalization. In this work, we propose a neuron-efficient fine-tuning framework for MDMT that identifies and updates consensus-aligned neurons within LLMs. These neurons are selected by maximizing the mutual information between neuron behavior and domain features, enabling LLMs to capture both generalizable translation patterns and domain-specific nuances. Our method then fine-tunes LLMs guided by these neurons, effectively mitigating parameter interference and domain-specific overfitting. Comprehensive experiments on three LLMs across ten German-English and Chinese-English translation domains show that our method consistently outperforms strong PEFT baselines on both seen and unseen domains, achieving state-of-the-art performance.
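As a rough illustration of the two steps the abstract describes (MI-based neuron selection, then neuron-guided fine-tuning), the sketch below scores each neuron by the mutual information between its per-example activation and a discrete domain label, keeps the top-k neurons, and masks gradients so that fine-tuning only updates the corresponding rows of a layer's weight matrix. This is a minimal sketch under assumed inputs (precomputed mean activations and integer domain ids); `mutual_info_classif` is a stand-in estimator and the row-wise gradient mask is an assumed update mechanism, not the paper's exact selection criterion or training procedure.

```python
# Hypothetical sketch: select high-MI neurons and restrict fine-tuning to them.
import numpy as np
import torch
from sklearn.feature_selection import mutual_info_classif


def select_neurons(activations: np.ndarray, domains: np.ndarray, k: int) -> np.ndarray:
    """Rank neurons by mutual information with the domain label; keep the top k.

    activations: (num_examples, num_neurons) mean activation per example.
    domains:     (num_examples,) integer domain ids.
    """
    mi = mutual_info_classif(activations, domains)  # one MI score per neuron
    return np.argsort(mi)[-k:]                      # indices of the k highest-MI neurons


def mask_gradients(layer: torch.nn.Linear, keep: np.ndarray) -> None:
    """Zero gradients for all output neurons except the selected ones, so an
    ordinary optimizer step only updates the chosen rows of layer.weight."""
    mask = torch.zeros(layer.out_features, 1)
    mask[keep] = 1.0
    layer.weight.register_hook(lambda g: g * mask)           # (out, in) rows masked
    if layer.bias is not None:
        layer.bias.register_hook(lambda g: g * mask.squeeze(1))
```

In this toy setting, any standard optimizer over the layer's parameters would leave non-selected neurons untouched, which is one plausible way to realize the "update only consensus-aligned neurons" idea with off-the-shelf tooling.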