Humans judge perceptual similarity according to diverse visual attributes, including scene layout, subject location, and camera pose. Existing vision models understand a wide range of semantic abstractions but improperly weigh these attributes and thus make inferences misaligned with human perception. While vision representations have previously benefited from alignment in contexts like image generation, the utility of perceptually aligned representations in more general-purpose settings remains unclear. Here, we investigate how aligning vision model representations to human perceptual judgments impacts their usability across diverse computer vision tasks. We finetune state-of-the-art models on human similarity judgments for image triplets and evaluate them across standard vision benchmarks. We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks, including counting, segmentation, depth estimation, instance retrieval, and retrieval-augmented generation. In addition, we find that performance is widely preserved on other tasks, including specialized out-of-distribution domains such as medical imaging and 3D environment frames. Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
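The finetuning setup described above can be sketched as a triplet objective: given a reference image and two candidates, the model's embedding similarities should agree with the human annotator's choice of which candidate is more similar. The snippet below is a minimal, hedged illustration of one such objective (a hinge loss over cosine similarities, operating on precomputed embeddings); the margin value, loss form, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity between two batches of embeddings.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return (a * b).sum(axis=-1)

def triplet_alignment_loss(ref, img_a, img_b, human_choice, margin=0.05):
    """Hinge loss encouraging the embedding-similarity ordering to match
    the human judgment for each triplet (illustrative sketch).

    human_choice[i] = 0 if annotators judged image A more similar to the
    reference, 1 if they judged image B more similar. `margin` is an
    assumed hyperparameter, not a value from the paper.
    """
    sim_a = cosine_sim(ref, img_a)
    sim_b = cosine_sim(ref, img_b)
    # Signed gap: positive when the model agrees with the human choice.
    gap = np.where(human_choice == 0, sim_a - sim_b, sim_b - sim_a)
    # Penalize triplets where the model's ordering disagrees (or agrees
    # by less than the margin); zero loss once the gap exceeds the margin.
    return np.maximum(0.0, margin - gap).mean()
```

In an actual finetuning loop, `ref`, `img_a`, and `img_b` would be differentiable embeddings produced by the vision backbone, and this loss would be minimized by gradient descent on the backbone's weights.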