Ensembles of separate neural networks (NNs) have shown superior accuracy and confidence calibration over a single NN across tasks. Recent methods compress ensembles within a single network via early exits or multi-input multi-output frameworks. However, the landscape of these methods has so far remained fragmented, making it difficult to choose the right approach for a given task. Furthermore, the algorithmic performance of these methods lags behind that of an ensemble of separate NNs and requires extensive architecture tuning. We propose a novel methodology that unifies these approaches into a Single Architecture Ensemble (SAE). Our method learns the optimal number and depth of exits per ensemble input within a single NN, enabling the SAE framework to flexibly tailor its configuration to a given architecture or application. We evaluate SAEs on image classification and regression across various network architecture types and sizes. We demonstrate accuracy and confidence calibration competitive with the baselines while reducing the compute operations or parameter count by up to $1.5{\sim}3.7\times$.