Cloth-changing person re-identification (CC-ReID), also known as long-term person re-identification (LT-ReID), is a critical and challenging research topic in computer vision that has recently garnered significant attention. However, due to the high cost of constructing CC-ReID data, existing data-driven models are difficult to train effectively on limited data, leading to overfitting. To address this challenge, we propose a low-cost and efficient pipeline for generating controllable, high-quality synthetic data that simulates real surveillance scenarios specific to the CC-ReID task. In particular, we construct a new self-annotated CC-ReID dataset named Cloth-Changing Unreal Person (CCUP), containing 6,000 identities, 1,179,976 images, 100 cameras, and an average of 26.5 outfits per individual. Based on this large-scale dataset, we introduce an effective and scalable pretrain-finetune framework for enhancing the generalization capabilities of conventional CC-ReID models. Extensive experiments demonstrate that two typical models, TransReID and FIRe^2, when integrated into our framework, outperform other state-of-the-art models after pretraining on CCUP and finetuning on benchmarks such as PRCC, VC-Clothes, and NKUP. CCUP is available at: https://github.com/yjzhao1019/CCUP.