The space of task-agnostic feature upsampling has emerged as a promising area of research for efficiently creating denser features from pre-trained visual backbones. These methods act as a shortcut to dense features at a fraction of the cost by learning to map low-resolution features to high-resolution versions. While early works in this space used iterative upsampling approaches, more recent works have switched to cross-attention-based methods, which risk inheriting the same efficiency scaling problems as the backbones they upsample. In this work, we demonstrate that iterative upsampling methods can still compete with cross-attention-based methods; moreover, they can achieve state-of-the-art performance at lower inference cost. We propose UPLiFT, an architecture for Universal Pixel-dense Lightweight Feature Transforms. We also propose an efficient Local Attender operator to overcome the limitations of prior iterative feature upsamplers. This operator uses an alternative attentional pooling formulation defined fully locally. We show that our Local Attender allows UPLiFT to maintain stable features throughout upsampling, enabling state-of-the-art performance at lower inference cost than existing pixel-dense feature upsamplers. In addition, we apply UPLiFT to generative downstream tasks and show that it achieves competitive performance with state-of-the-art Coupled Flow Matching models for VAE feature upsampling. Altogether, UPLiFT offers a versatile and efficient approach to creating denser features.