Test-time entropy minimization helps a model adapt to novel environments and incentivizes its reasoning capability, unleashing the model's potential during inference by allowing it to evolve and improve in real time using its own predictions, achieving promising performance. However, pure entropy minimization can favor non-generalizable shortcuts, such as inflating the logit norm or driving all predictions to a dominant class to reduce entropy, risking collapsed solutions (e.g., constant one-hot outputs) that trivially minimize the objective without meaningful learning. In this paper, we introduce ZeroSiam, an efficient asymmetric Siamese architecture tailored for test-time entropy minimization. ZeroSiam prevents collapse through asymmetric divergence alignment, which is efficiently achieved by a learnable predictor and a stop-gradient operator placed before the classifier. We provide empirical and theoretical evidence that ZeroSiam not only prevents collapsed solutions but also absorbs and regularizes biased learning signals, improving performance even when no collapse occurs. Despite its simplicity, extensive results show that ZeroSiam performs more stably than prior methods with negligible overhead, demonstrating efficacy on both vision adaptation and large language model reasoning tasks across challenging test scenarios and diverse models, including tiny models that are particularly collapse-prone.
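The collapse shortcuts described above are easy to see numerically. The following minimal numpy sketch (illustrative only; it is not the ZeroSiam method itself, and the array shapes and values are hypothetical) shows that both shortcuts, scaling up the logit norm and pushing every prediction toward one dominant class, reduce the mean softmax entropy without learning anything useful:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_entropy(logits):
    # Average Shannon entropy of the predicted distributions.
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 10))       # diverse, uncollapsed predictions

# Shortcut 1: inflate the logit norm (same argmax, sharper distributions).
inflated = 10.0 * logits

# Shortcut 2: collapse to a constant near-one-hot output on a single class.
collapsed = np.zeros((8, 10))
collapsed[:, 0] = 20.0

print(mean_entropy(logits))
print(mean_entropy(inflated))   # lower than the original entropy
print(mean_entropy(collapsed))  # near zero: trivially minimizes the objective
```

Both degenerate solutions drive the entropy objective down, which is why an unconstrained entropy minimizer can drift toward them; ZeroSiam's predictor and stop-gradient are designed to block exactly this failure mode.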