Test-time entropy minimization helps a model adapt to novel environments and incentivizes its reasoning capability, unleashing the model's potential during inference: the model evolves and improves in real time using its own predictions, achieving promising performance. However, pure entropy minimization can favor non-generalizable shortcuts, such as inflating the logit norm and driving all predictions to a dominant class to reduce entropy, risking collapsed solutions (e.g., constant one-hot outputs) that trivially minimize the objective without meaningful learning. In this paper, we reveal asymmetry as a key mechanism for collapse prevention and introduce ZeroSiam, an efficient asymmetric Siamese architecture tailored for test-time entropy minimization. ZeroSiam prevents collapse through asymmetric divergence alignment, realized efficiently by a learnable predictor and a stop-gradient operator placed before the classifier. We provide empirical and theoretical evidence that ZeroSiam not only prevents collapse but also regularizes biased learning signals, improving performance even when no collapse occurs. Despite its simplicity, extensive results show that ZeroSiam is more stable than prior methods with negligible overhead, demonstrating efficacy on both vision adaptation and large language model reasoning tasks across challenging test scenarios and diverse models, including particularly collapse-prone tiny models.
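To make the objective concrete, the sketch below illustrates plain test-time entropy minimization through a learnable predictor while the classifier head is held fixed (mimicking a stop-gradient/frozen branch). This is a minimal NumPy illustration under assumed shapes and names (`P` for the predictor, `W` for the classifier); it is not the authors' ZeroSiam implementation and omits the asymmetric divergence-alignment branch.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum())

rng = np.random.default_rng(0)
D, C = 8, 4                    # feature dim, number of classes (illustrative)
x = rng.normal(size=D)         # frozen backbone feature for one test sample
W = rng.normal(size=(C, D))    # classifier head, kept fixed (no gradient)
P = np.eye(D)                  # learnable predictor, initialized as identity

def forward(P):
    z = W @ (P @ x)            # logits through the predictor branch
    return softmax(z)

H_before = entropy(forward(P))

lr = 0.01
for _ in range(50):
    p = forward(P)
    H = entropy(p)
    g = -p * (np.log(p + 1e-12) + H)   # dH/dz for softmax entropy
    P -= lr * np.outer(W.T @ g, x)     # chain rule: dH/dP = (W^T g) x^T

H_after = entropy(forward(P))
```

Gradient steps on the predictor alone already drive the prediction entropy down (`H_after < H_before`), which is exactly the dynamic that, taken to its extreme on a dominant class, produces the collapsed one-hot solutions the abstract warns about.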