Mixture-of-Experts (MoE) has become a prominent paradigm for scaling Large Language Models (LLMs). Parameter-efficient fine-tuning (PEFT) methods such as LoRA are widely adopted to adapt pretrained MoE LLMs to downstream tasks. However, existing approaches assign identical LoRA ranks to all experts, overlooking the intrinsic functional specialization within MoE LLMs. This uniform allocation leads to a resource mismatch: task-relevant experts are under-provisioned, while less relevant ones receive redundant parameters. We propose DR-LoRA, a Dynamic Rank LoRA framework that grows expert LoRA ranks during fine-tuning according to task-specific demand. DR-LoRA employs an Expert Saliency Scoring mechanism that integrates expert routing frequency and LoRA rank importance to quantify each expert's demand for additional capacity. Experts with higher saliency scores are prioritized for rank expansion, allowing a heterogeneous rank distribution tailored to the target task to emerge automatically. Experiments on multiple benchmarks demonstrate that DR-LoRA consistently outperforms standard LoRA and static allocation strategies under the same parameter budget, achieving superior task performance with more efficient parameter utilization.
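The abstract leaves the exact saliency formula unspecified; the following is a minimal sketch, assuming saliency is a convex combination of each expert's normalized routing frequency and a normalized rank-importance signal, and that each growth event expands the top-scoring experts by a fixed rank step. The function names, the mixing weight `alpha`, and the growth parameters `k`, `step`, and `max_rank` are hypothetical illustrations, not the paper's actual method.

```python
import torch

def expert_saliency(routing_freq, rank_importance, alpha=0.5):
    # Normalize both signals to a comparable scale, then mix them.
    # `alpha` is a hypothetical weight; the abstract gives no formula.
    f = routing_freq / routing_freq.sum()
    s = rank_importance / rank_importance.sum()
    return alpha * f + (1 - alpha) * s

def grow_ranks(ranks, saliency, k=2, step=2, max_rank=64):
    # One growth event: expand the k most salient experts that still
    # have headroom by `step` rank units each.
    ranks = ranks.clone()
    order = torch.argsort(saliency, descending=True)
    grown = 0
    for e in order:
        if grown == k:
            break
        if ranks[e] + step <= max_rank:
            ranks[e] += step
            grown += 1
    return ranks

# Toy usage: 8 experts starting at rank 4; one growth event expands the
# two most salient experts. In the framework this would presumably run
# periodically during fine-tuning as routing statistics are refreshed.
freq = torch.tensor([0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05])
imp = torch.rand(8)  # stand-in for a per-expert rank-importance signal
sal = expert_saliency(freq, imp)
print(grow_ranks(torch.full((8,), 4), sal))
```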