Epistemic uncertainty is crucial for safety-critical applications and out-of-distribution detection tasks. Yet we uncover a paradoxical phenomenon in deep learning models: an epistemic uncertainty collapse as model complexity increases, challenging the assumption that larger models invariably offer better uncertainty quantification. We propose that this collapse stems from implicit ensembling within large models. To support this hypothesis, we demonstrate epistemic uncertainty collapse empirically across a range of architectures, from explicit ensembles of ensembles and simple MLPs to state-of-the-art vision models, including ResNets and Vision Transformers. For the latter, we examine implicit ensemble extraction, decomposing larger models into diverse sub-models and thereby recovering epistemic uncertainty. We provide theoretical justification for these phenomena and explore their implications for uncertainty estimation.
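For context, epistemic uncertainty in the ensemble setting is commonly measured as the mutual information between the prediction and the ensemble member: the entropy of the averaged predictive distribution minus the average entropy of the individual members. The sketch below illustrates this standard decomposition; it is a minimal illustration, not the paper's code, and the function name, array shapes, and random inputs are assumptions chosen for the example.

```python
# Minimal sketch of the standard ensemble-based uncertainty decomposition:
# total predictive entropy = aleatoric (mean member entropy) + epistemic (mutual information).
import numpy as np

def epistemic_uncertainty(member_probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """member_probs: softmax outputs of shape (n_members, n_samples, n_classes)."""
    mean_probs = member_probs.mean(axis=0)                                # ensemble predictive distribution
    total = -(mean_probs * np.log(mean_probs + eps)).sum(axis=-1)         # entropy of the averaged prediction
    aleatoric = -(member_probs * np.log(member_probs + eps)).sum(axis=-1).mean(axis=0)  # mean member entropy
    return total - aleatoric                                              # mutual information (epistemic part)

# Hypothetical usage: a 5-member ensemble evaluated on 100 inputs with 10 classes.
probs = np.random.dirichlet(np.ones(10), size=(5, 100))
print(epistemic_uncertainty(probs).shape)  # (100,)
```

Under the paper's implicit-ensembling view, averaging over sub-models extracted from a single large model plays the role of `member_probs` here, which is how decomposing a model into diverse sub-models can recover a non-collapsed epistemic term.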