Foundation models have become a dominant paradigm in machine learning, achieving remarkable performance across diverse tasks through large-scale pretraining. However, these models often yield overconfident, uncalibrated predictions. The standard approach to quantifying epistemic uncertainty, training an ensemble of independent models, incurs prohibitive computational costs that scale linearly with ensemble size, making it impractical for large foundation models. We propose Singular Value Ensemble (SVE), a parameter-efficient implicit ensemble method that builds on a simple but powerful core assumption: that the singular vectors of the weight matrices constitute meaningful subspaces of the model's knowledge. Pretrained foundation models encode rich, transferable information in their weight matrices. If the singular vectors are indeed meaningful (orthogonal) "knowledge directions", then an ensemble can be obtained by modulating only how strongly each direction contributes to the output. Rather than learning entirely new parameters, we freeze the singular vectors and train only per-member singular values that rescale the contribution of each direction in that shared knowledge basis. Ensemble diversity emerges naturally: stochastic initialization and random sampling of mini-batches during joint training cause different members to converge to different combinations of the same underlying knowledge. SVE achieves uncertainty quantification comparable to explicit deep ensembles while increasing the parameter count of the base model by less than 1%, making principled uncertainty estimation accessible in resource-constrained settings. We validate SVE on NLP and vision tasks with various backbones and show that it improves calibration while maintaining predictive accuracy.
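The mechanism described above can be illustrated with a minimal numpy sketch. This is a hedged toy example, not the authors' implementation: the layer size, number of members, initialization noise scale, and the helper `member_forward` are illustrative assumptions. It shows the core idea of a shared frozen basis (the singular vectors) with per-member trainable spectra (the singular values).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained weight matrix of a single 8x8 layer.
W = rng.standard_normal((8, 8))

# Shared knowledge basis: singular vectors U and Vt are frozen after SVD.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

# Each ensemble member holds only its own copy of the singular values,
# here initialized near the pretrained spectrum with small noise as a
# stand-in for the stochastic initialization that drives diversity.
n_members = 4
member_s = [s * (1.0 + 0.01 * rng.standard_normal(s.shape))
            for _ in range(n_members)]

def member_forward(x, k):
    """Forward pass of member k: same U and Vt, only the spectrum differs."""
    W_k = U @ np.diag(member_s[k]) @ Vt
    return x @ W_k.T

# Ensemble prediction: average over members; the spread across members
# serves as an (epistemic) uncertainty estimate.
x = rng.standard_normal(8)
outputs = np.stack([member_forward(x, k) for k in range(n_members)])
mean_out = outputs.mean(axis=0)
uncertainty = outputs.std(axis=0)
```

Note the parameter economics: each member adds only `min(m, n)` singular values per `m x n` weight matrix, while the `m * n` entries of the singular vectors are shared, which is what keeps the ensemble's overhead below 1% of the base model.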