This paper introduces TinySaver, an early-exit-like dynamic model compression approach that employs tiny models to adaptively substitute for large models. Unlike traditional compression techniques, dynamic methods such as TinySaver can exploit differences in input difficulty, allowing certain inputs to complete their inference early and thereby conserving computational resources. Most existing early-exit designs attach additional network branches to the model's backbone. Our study, however, reveals that completely independent tiny models can take over a substantial portion of a larger model's workload with minimal impact on performance. Employing such a model as the first exit markedly improves computational efficiency. By searching for and deploying the most appropriate tiny model as the computational saver for a given large model, the proposed approach serves as a novel and generic method for model compression. This finding can help the research community explore new compression methods to address the escalating computational demands of rapidly evolving AI models. Our evaluation on ImageNet-1k classification demonstrates that the approach can reduce the number of compute operations by up to 90\%, with only negligible performance loss, across various modern vision models.
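The core idea described above can be illustrated with a minimal sketch of confidence-gated routing: a tiny model runs first, and the large model is invoked only when the tiny model's prediction is not confident enough. This is an illustrative simplification, not the paper's exact method; `tiny_model`, `large_model`, and the threshold `tau` are hypothetical stand-ins (real saver selection and exit criteria are part of the proposed search procedure).

```python
import numpy as np

# Hypothetical stand-ins for the tiny "saver" model and the large model.
# In practice these would be trained networks; the logits here are fixed
# purely so the routing logic can be demonstrated.
def tiny_model(x):
    return np.array([2.5, 0.1, -1.0]) + 0.01 * x.sum()

def large_model(x):
    return np.array([0.2, 3.0, -0.5]) + 0.01 * x.sum()

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tinysaver_infer(x, tau=0.8):
    """Confidence-gated early exit: if the tiny model's top softmax
    probability reaches the threshold tau, accept its prediction and
    skip the large model entirely, saving its compute cost."""
    p = softmax(tiny_model(x))
    if p.max() >= tau:
        return int(p.argmax()), "tiny"   # early exit: large model never runs
    q = softmax(large_model(x))          # fall back to the large model
    return int(q.argmax()), "large"
```

Lowering `tau` routes more inputs through the tiny model alone (more compute saved, potentially more errors); raising it defers more inputs to the large model. Tuning this trade-off per model pair is what determines the achievable savings.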