In cross-domain retrieval, a model must identify images of the same semantic category across two visual domains. For instance, given a sketch of an object, a model needs to retrieve a real image of it from an online store's catalog. A standard approach to this problem is to learn a feature space of images in which Euclidean distances reflect semantic similarity. Even without human annotations, which can be expensive to acquire, prior methods perform reasonably well when trained on unlabeled images. We take this setting further, to scenarios where the two domains do not necessarily share any common categories in their training data. This can occur, for example, when the two domains come from different versions of a biometric sensor, each recording the identities of different people. We propose a simple solution: generate synthetic data to fill in the missing category examples across domains, via category-preserving translation of images from one visual domain to the other. We compare translation approaches trained specifically for a given pair of domains against those that leverage large-scale pre-trained text-to-image diffusion models via prompts, and find that the latter generate better replacement synthetic data, leading to more accurate cross-domain retrieval models. Our best SynCDR model outperforms prior art by up to 15\%. Code for our work is available at https://github.com/samarth4149/SynCDR .
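As a rough illustration of the prompt-based translation idea, the sketch below uses an off-the-shelf Stable Diffusion image-to-image pipeline (via the `diffusers` library) to translate a sketch-domain image of a known category into a synthetic photo-domain image. This is a minimal sketch, not the SynCDR pipeline: the checkpoint, file names, prompt template, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of category-preserving domain translation with a
# pre-trained text-to-image diffusion model (NOT the SynCDR code).
# The checkpoint, prompt, strength, and file names are assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed off-the-shelf checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Source image: a sketch of a category that is missing in the photo domain.
sketch = Image.open("sketch_chair.png").convert("RGB").resize((512, 512))

# A category-preserving prompt steers the output toward the target domain
# ("photo") while naming the category so semantics are retained.
photo = pipe(
    prompt="a photo of a chair",
    image=sketch,
    strength=0.75,        # how far the output may move from the source image
    guidance_scale=7.5,   # prompt adherence
).images[0]

photo.save("synthetic_photo_chair.png")
```

Synthetic images produced this way could then serve as the missing cross-domain examples when training the retrieval feature space described above.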