Effective retrieval across both seen and unseen categories is crucial for modern image retrieval systems. Retrieval on seen categories ensures precise recognition of known classes, while retrieval on unseen categories promotes generalization to novel classes with limited supervision. However, most existing deep hashing methods are confined to a single training paradigm, either pointwise or pairwise: the former excels on seen categories, while the latter generalizes better to unseen ones. To overcome this limitation, we propose Unified Hashing (UniHash), a dual-branch framework that unifies the strengths of both paradigms to achieve balanced retrieval performance across seen and unseen categories. UniHash consists of two complementary branches: a center-based branch following the pointwise paradigm and a pairwise branch following the pairwise paradigm. A novel hash code learning method enables bidirectional knowledge transfer between the branches, improving both the discriminability and the generalization of the learned hash codes. It employs a mutual learning loss to align the two branches' hash representations and introduces a Split-Merge Mixture of Hash Experts (SM-MoH) module to strengthen their cross-branch exchange. Theoretical analysis substantiates the effectiveness of UniHash, and extensive experiments on CIFAR-10, MSCOCO, and ImageNet demonstrate that UniHash consistently achieves state-of-the-art performance in both seen and unseen image retrieval scenarios.
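To make the mutual learning idea concrete, the following is a minimal illustrative sketch, not the paper's actual formulation: it assumes each branch outputs relaxed (continuous) hash codes in [-1, 1] via tanh, and uses a simple symmetric mean-squared alignment term as a stand-in for the mutual learning loss described above. The function name and the choice of MSE are assumptions for illustration only.

```python
import numpy as np

def mutual_learning_loss(h_center: np.ndarray, h_pair: np.ndarray) -> float:
    """Symmetric alignment between the two branches' relaxed hash codes.

    h_center, h_pair: (batch, bits) continuous codes in [-1, 1],
    e.g. tanh outputs of the center-based and pairwise branches.
    Illustrative stand-in: a symmetric MSE pulls the two branches'
    codes toward each other, so knowledge flows in both directions.
    """
    return float(np.mean((h_center - h_pair) ** 2))

# Toy example: 4-bit relaxed codes for a batch of 2 images.
h_c = np.tanh(np.array([[2.0, -1.5,  0.5, -3.0],
                        [1.0,  1.0, -2.0,  0.2]]))
h_p = np.tanh(np.array([[1.8, -1.2,  0.7, -2.5],
                        [0.9,  1.1, -1.8,  0.3]]))
loss = mutual_learning_loss(h_c, h_p)  # small positive value; 0 iff codes agree
```

Because the loss is symmetric in its two arguments, the gradient pressure is bidirectional: each branch is nudged toward the other's representation rather than one branch distilling into a frozen teacher.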