Deep hashing has been intensively studied and successfully applied in large-scale image retrieval systems due to its efficiency and effectiveness. Recent studies have recognized that the existence of adversarial examples poses a security threat to deep hashing models, i.e., adversarial vulnerability. Notably, it is challenging to efficiently distill reliable semantic representatives for deep hashing to guide adversarial learning, which hinders the enhancement of the adversarial robustness of deep hashing-based retrieval models. Moreover, current research on adversarial training for deep hashing is difficult to formalize into a unified minimax structure. In this paper, we explore Semantic-Aware Adversarial Training (SAAT) for improving the adversarial robustness of deep hashing models. Specifically, we conceive a discriminative mainstay features learning (DMFL) scheme to construct semantic representatives for guiding adversarial learning in deep hashing. In particular, our DMFL, which comes with a strict theoretical guarantee, is adaptively optimized in a discriminative learning manner, where discriminative and semantic properties are jointly considered. Furthermore, adversarial examples are fabricated by maximizing the Hamming distance between the hash codes of adversarial samples and the mainstay features, the efficacy of which is validated in adversarial attack trials. Further, we, for the first time, formulate adversarial training of deep hashing as a unified minimax optimization under the guidance of the generated mainstay codes. Extensive experiments on benchmark datasets show superior attack performance over state-of-the-art algorithms; meanwhile, the proposed adversarial training effectively eliminates adversarial perturbations, enabling trustworthy deep hashing-based retrieval. Our code is available at https://github.com/xandery-geek/SAAT.
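The unified minimax structure mentioned above can be sketched as follows; the notation here is illustrative and not taken from the abstract: $F_{\theta}$ denotes the deep hashing network with parameters $\theta$, $d_H$ the Hamming distance, $b^{\star}$ the mainstay code for sample $x$, and $\epsilon$ the perturbation budget. The inner maximization fabricates an adversarial example whose hash code is far from the mainstay code, while the outer minimization trains the network to pull that worst-case code back toward it:

```latex
\min_{\theta} \; \mathbb{E}_{(x,\, b^{\star})}
  \left[ \max_{\|\delta\|_{\infty} \le \epsilon}
    d_H\!\left( \operatorname{sign}\!\left(F_{\theta}(x + \delta)\right),\; b^{\star} \right) \right]
```

In this hedged reading, attack-only evaluation corresponds to solving just the inner maximization against a fixed model, whereas SAAT alternates between the two levels during training.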