Aligning Large Language Models (LLMs) with human values, so that they are helpful, harmless, and honest (HHH), is important for safe deployment. Existing works use Supervised Fine-Tuning (SFT) and Mixture-of-Experts (MoE) architectures to align LLMs, but both face challenges in multi-objective settings: SFT induces interference between conflicting objectives, while MoEs suffer from miscalibrated routing. We term this failure mode Axis Collapse, marked by (1) disjoint feature spaces that cause catastrophic forgetting, and (2) unreliable inference from misrouted experts. To resolve this, we propose AlignX, a two-stage framework. Stage 1 uses prompt-injected fine-tuning to extract axis-specific task features, mitigating catastrophic forgetting. Stage 2 deploys a MoCaE module that calibrates expert routing using fractal and natural geometry, improving inference reliability. AlignX achieves significant gains on Alpaca (Helpfulness), BeaverTails (Harmlessness), and TruthfulQA (Honesty): a +171.5% win rate, +110.1% in truthfulness-informativeness, and 4.3% fewer safety violations. It also reduces latency and memory usage by over 35% compared to prior MoE methods. Results across four LLMs validate its generalizability.
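The abstract does not specify how MoCaE's calibration works; as a purely hypothetical illustration of the general idea of calibrated expert routing, the sketch below implements temperature-calibrated top-k softmax gating over a set of experts. All names (`calibrated_moe_route`, `gate_w`, `temperature`) and the calibration scheme are assumptions, not the paper's method.

```python
import numpy as np

def calibrated_moe_route(x, expert_fns, gate_w, temperature=1.0, top_k=2):
    """Route input x through top-k experts using temperature-calibrated
    softmax gating, then combine expert outputs by renormalized weights.

    Hypothetical sketch only; not the MoCaE algorithm from the paper.
    """
    logits = gate_w @ x                        # one gating logit per expert
    probs = np.exp(logits / temperature)       # temperature rescales confidence
    probs /= probs.sum()                       # softmax over all experts
    top = np.argsort(probs)[-top_k:]           # indices of the top-k experts
    w = probs[top] / probs[top].sum()          # renormalize over selected experts
    # Weighted combination of the selected experts' outputs
    return sum(wi * expert_fns[i](x) for wi, i in zip(w, top))
```

With uniform gating logits and two experts computing `x` and `2*x`, the output is their equal-weight average, `1.5*x`; lowering `temperature` sharpens the routing distribution toward the highest-scoring expert.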