Large language models (LLMs) suffer from catastrophic forgetting in sequential multi-task learning. Existing parameter-regularization methods (e.g., O-LoRA, N-LoRA) mitigate inter-task interference via low-rank subspace orthogonality, but their additive updates distort the intrinsic geometry of the model parameters. We propose \textbf{OLieRA}, a Lie-group-based fine-tuning framework that preserves parameter geometry through multiplicative updates while enforcing orthogonality across task subspaces. OLieRA achieves state-of-the-art performance on the Standard CL benchmark and remains highly competitive under long task sequences. It further inherits the replay-free, task-ID-free inference properties of O-LoRA, establishing a principled paradigm for continual learning in LLMs.
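To make the contrast concrete, the display below sketches one natural reading of the additive-versus-multiplicative distinction; the abstract does not specify OLieRA's exact parameterization, so the exponential-map form, the symbols $W_0$, $A_t$, $B_t$, and the cross-task penalty are illustrative assumptions, with the penalty written in the O-LoRA style rather than as OLieRA's own regularizer.
\[
\underbrace{W' \;=\; W_0 + B_t A_t}_{\text{additive (LoRA-style) update}}
\qquad\text{vs.}\qquad
\underbrace{W' \;=\; \exp\!\left(B_t A_t\right) W_0}_{\text{multiplicative (Lie-group-style) update}},
\qquad
\bigl\lVert A_i A_j^{\top} \bigr\rVert \approx 0 \;\; (i \neq j),
\]
where $W_0$ is the frozen pretrained weight, $A_t$ and $B_t$ are the low-rank factors learned for task $t$, $\exp(\cdot)$ denotes the matrix exponential mapping a low-rank generator to a group element acting on $W_0$, and the final term is an orthogonality penalty that keeps the low-rank subspaces of different tasks from overlapping.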