Longitudinal face recognition in children remains challenging due to rapid and nonlinear facial growth, which causes template drift and increasing verification errors over time. This work investigates whether synthetic face data can act as a longitudinal stabilizer by improving the temporal robustness of child face recognition models. Using an identity-disjoint protocol on the Young Face Aging (YFA) dataset, we evaluate three settings: (i) pretrained MagFace embeddings without dataset-specific fine-tuning, (ii) MagFace fine-tuned on authentic training faces only, and (iii) MagFace fine-tuned on a combination of authentic and synthetically generated training faces. Synthetic data is generated with StyleGAN2-ADA and incorporated exclusively within the training identities; a post-generation filtering step is applied to mitigate identity leakage and remove artifact-affected samples. Experimental results across enrollment-to-verification time gaps of 6 to 36 months show that synthetic-augmented fine-tuning substantially reduces error rates relative to both the pretrained baseline and real-only fine-tuning. These findings provide a risk-aware assessment of synthetic augmentation for improving identity persistence in pediatric face recognition.
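The sketch below illustrates, under stated assumptions, the identity-disjoint augmentation protocol summarized above: synthetic samples are admitted only for training identities, and a similarity-based filter discards synthetic faces that resemble held-out test identities (identity leakage). This is not the authors' released code; the function names, the dictionary-based sample format, and the leakage threshold `leak_thresh=0.4` are illustrative assumptions, and the embedding vectors are assumed to come from a face encoder such as MagFace.

```python
"""Minimal sketch of an identity-disjoint synthetic-augmentation protocol.
All names, data structures, and thresholds are illustrative assumptions."""
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors (e.g., MagFace features).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def filter_synthetic(synth_samples, test_gallery_embeddings, leak_thresh=0.4):
    """Drop synthetic samples whose embedding is too close to any held-out
    (test) identity, mitigating identity leakage. `synth_samples` is assumed
    to be a list of dicts with 'identity' and 'embedding' keys."""
    kept = []
    for sample in synth_samples:
        max_sim = max(cosine(sample["embedding"], g) for g in test_gallery_embeddings)
        if max_sim < leak_thresh:  # assumed similarity threshold
            kept.append(sample)
    return kept


def build_training_set(real_train, synth_train_kept, test_identities):
    """Training pool = authentic training faces + filtered synthetic faces.
    Synthetic data is restricted to training identities; test identities
    remain disjoint and are used only for enrollment/verification."""
    synth_ids = {s["identity"] for s in synth_train_kept}
    assert not (synth_ids & set(test_identities)), "identity-disjoint protocol violated"
    return list(real_train) + list(synth_train_kept)
```

In this sketch, artifact-affected samples would be removed by an additional quality check before `filter_synthetic`; only the leakage filter is shown here for brevity.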