Vision-Language Models (VLMs) have emerged as the dominant approach for zero-shot recognition, adept at handling diverse scenarios and significant distribution shifts. However, their deployment in risk-sensitive areas requires a deeper understanding of their uncertainty estimation capabilities, a relatively unexplored area. In this study, we explore the calibration properties of VLMs across different architectures, datasets, and training strategies. In particular, we analyze the uncertainty estimation performance of VLMs when calibrated in one domain, label set, or hierarchy level, and tested in a different one. Our findings reveal that while VLMs are not inherently calibrated for uncertainty, temperature scaling significantly and consistently improves calibration, even across shifts in distribution and changes in label set. Moreover, VLMs can be calibrated with a very small set of examples. Through detailed experimentation, we highlight the potential applications and importance of our insights, aiming for more reliable and effective use of VLMs in critical, real-world scenarios.
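To make the temperature-scaling procedure concrete, the sketch below shows one standard way to fit a single scalar temperature on a small calibration set of VLM zero-shot logits by minimizing negative log-likelihood. This is a minimal illustration, assuming PyTorch; the tensors `logits` and `labels` are hypothetical placeholders for image-text similarity scores and ground-truth class indices, not artifacts of the paper's actual codebase.

```python
# Minimal sketch of temperature scaling for VLM zero-shot logits (assumed setup).
import torch
import torch.nn.functional as F

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor,
                    lr: float = 0.01, steps: int = 200) -> float:
    """Learn one scalar temperature by minimizing NLL on a held-out calibration set."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log(T) so T stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

# Usage (illustrative): fit T on a few labeled examples from one domain or label set,
# then rescale test-time logits from a different one before taking the softmax.
# T = fit_temperature(calib_logits, calib_labels)
# calibrated_probs = (test_logits / T).softmax(dim=-1)
```

Because only a single parameter is learned, this procedure can be run with very few labeled examples, which is consistent with the few-shot calibration behavior described above.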