Popular zero-shot models suffer from artifacts inherited from pretraining. A particularly detrimental artifact, caused by unbalanced web-scale pretraining data, is a mismatched label distribution. Existing approaches that seek to repair the label distribution are unsuitable in zero-shot settings, as they have incompatible requirements, such as access to labeled downstream task data or knowledge of the true label balance in the pretraining distribution. We sidestep these challenges and introduce a simple, lightweight approach that adjusts pretrained model predictions via optimal transport. Our technique requires only an estimate of the downstream task's label distribution. Theoretically, we characterize the improvement produced by our procedure under mild conditions and bound the error caused by misspecification. Empirically, we validate our method on a wide array of zero-shot image and text classification tasks, improving accuracy by 4.8% and 15.9% on average, respectively, and beating baselines such as Prior Matching -- often by significant margins -- on 17 out of 21 datasets.
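To make the idea concrete, here is a minimal NumPy sketch of one way such an adjustment can be implemented: treat the classifier's negative log-probabilities as transport costs, solve an entropy-regularized optimal transport problem with Sinkhorn iterations whose column marginals are the estimated label distribution, and read off adjusted predictions from the transport plan. This is an illustration of the general recipe, not the paper's implementation; the function names (`sinkhorn_plan`, `rebalance_zero_shot`), the regularization default, and the choice of cost are illustrative assumptions.

```python
import numpy as np

def sinkhorn_plan(cost, row_marginal, col_marginal, reg=1.0, n_iters=500):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    Returns a nonnegative plan whose rows sum to row_marginal and whose
    columns sum (approximately) to col_marginal.
    """
    K = np.exp(-cost / reg)                        # Gibbs kernel of the cost matrix
    u = np.ones_like(row_marginal)
    for _ in range(n_iters):
        v = col_marginal / (K.T @ u + 1e-30)       # scale columns toward the target label distribution
        u = row_marginal / (K @ v + 1e-30)         # scale rows toward the uniform example marginal
    return u[:, None] * K * v[None, :]

def rebalance_zero_shot(probs, label_dist_estimate, reg=1.0):
    """Adjust zero-shot class probabilities (n examples x k classes) so the
    predicted labels follow an estimated downstream label distribution."""
    n = probs.shape[0]
    cost = -np.log(probs + 1e-12)                  # high probability = cheap transport
    rows = np.full(n, 1.0 / n)                     # each example carries equal mass
    plan = sinkhorn_plan(cost, rows, label_dist_estimate, reg)
    return plan.argmax(axis=1)                     # adjusted hard predictions

# Toy usage: a classifier biased toward class 0, corrected with a 50/50 estimate.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4], [0.7, 0.3]])
print(rebalance_zero_shot(probs, np.array([0.5, 0.5])))  # e.g. [0 0 1 1]
```

Note one property of this sketch: with `reg=1.0` the Gibbs kernel equals the probability matrix itself, so the iterations reduce to iterative proportional fitting of the predicted probabilities; a smaller `reg` pushes the plan toward a hard assignment that matches the estimated label distribution more exactly.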