Low-Rank Adaptation (LoRA) fusion enables the composition of learned subject and style representations for controllable generation without retraining. However, existing methods rely on weight-based merging within a shared adaptation space, where independently trained LoRAs interfere and degrade fidelity. We show that this interference is fundamentally geometric: subject and style LoRAs occupy overlapping, non-orthogonal low-rank subspaces, making weight-based fusion inherently flawed. Analyzing the internal structure of LoRA updates, we find that generative behavior is dominated by a few principal directions, which must be preserved during fusion. Based on this insight, we reformulate LoRA fusion as a null-space projection problem and propose Null Space Projection LoRA (NP-LoRA), a projection-based framework that enforces subspace separation by construction. NP-LoRA extracts the principal style directions via singular value decomposition (SVD) and projects the subject LoRA into the orthogonal complement of the style subspace, preventing interference. We further introduce a soft projection mechanism that provides continuous control over the trade-off between subject fidelity and style preservation. Experiments show that NP-LoRA consistently outperforms strong baselines and generalizes across pretrained LoRA pairs without retraining.
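To make the projection concrete, the following is a minimal single-layer sketch of the fusion step described above, not the authors' implementation. It assumes the style subspace is spanned by the top-k right singular vectors of the style LoRA's full weight update (B @ A), and that the soft projection interpolates linearly between the identity and the hard null-space projector; the function name `np_lora_fuse` and the hyperparameters `k` and `alpha` are illustrative choices, not taken from the paper.

```python
import torch

def np_lora_fuse(delta_style: torch.Tensor,
                 delta_subject: torch.Tensor,
                 k: int = 8,
                 alpha: float = 1.0) -> torch.Tensor:
    """Illustrative NP-LoRA-style fusion for one layer (sketch, not the paper's code).

    delta_style, delta_subject: full LoRA updates (B @ A), shape (d_out, d_in).
    k: number of principal style directions to protect (assumed hyperparameter).
    alpha: soft-projection strength in [0, 1]; 1.0 gives a hard null-space projection.
    """
    # Extract principal style directions via SVD of the style update.
    U, S, Vh = torch.linalg.svd(delta_style, full_matrices=False)
    V_k = Vh[:k].T  # (d_in, k): top-k right singular vectors spanning the style subspace

    # Hard projector onto the orthogonal complement (null space) of the style subspace.
    eye = torch.eye(V_k.shape[0], dtype=delta_style.dtype)
    P_null = eye - V_k @ V_k.T

    # Soft projection: interpolate between identity and the null-space projector,
    # exposing a continuous trade-off between subject fidelity and style preservation.
    P_soft = (1.0 - alpha) * eye + alpha * P_null

    # Remove the subject update's overlap with the style subspace, then merge additively.
    delta_subject_proj = delta_subject @ P_soft
    return delta_style + delta_subject_proj
```

With `alpha = 1.0` this reduces to a hard null-space projection; smaller values retain more of the subject update's component inside the style subspace. In practice the same operation would be applied per layer to each pair of LoRA updates; whether the projection acts on the input (row) space, as sketched here, or on the output (column) space is an assumption of this sketch.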