Low-resolution face recognition is a challenging task due to the absence of informative details. Recent approaches based on knowledge distillation have shown that high-resolution clues can effectively guide low-resolution face recognition via proper knowledge transfer. However, due to the distribution difference between training and testing faces, the learned models often suffer from poor adaptability. To address this, we split the knowledge transfer process into distillation and adaptation steps, and propose an adaptable instance-relation distillation approach to facilitate low-resolution face recognition. In this approach, the student distills knowledge from a high-resolution teacher at both the instance level and the relation level, providing sufficient cross-resolution knowledge transfer. The learned student then adapts to recognize low-resolution faces through adaptive batch normalization during inference. In this manner, the capability of recovering missing details of familiar low-resolution faces is effectively enhanced, leading to better knowledge transfer. Extensive experiments on low-resolution face recognition clearly demonstrate the effectiveness and adaptability of our approach.
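The two distillation objectives and the inference-time adaptation step can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the instance-level loss is assumed to be an MSE between paired embeddings, the relation-level loss is assumed to match pairwise cosine-similarity matrices, and adaptive batch normalization is shown as recomputing normalization statistics from the current test batch instead of using training-time running statistics.

```python
import numpy as np

def instance_loss(student_feats, teacher_feats):
    # Instance-level distillation: align each student embedding with the
    # teacher's embedding of the same face (assumed MSE form).
    return np.mean((student_feats - teacher_feats) ** 2)

def relation_loss(student_feats, teacher_feats):
    # Relation-level distillation: match the pairwise similarity structure
    # of the two embedding spaces (assumed cosine-similarity form).
    def sim(f):
        f = f / np.linalg.norm(f, axis=1, keepdims=True)
        return f @ f.T
    return np.mean((sim(student_feats) - sim(teacher_feats)) ** 2)

def adaptive_bn(feats, gamma, beta, eps=1e-5):
    # Adaptive batch normalization at inference: replace running statistics
    # learned on (high-resolution) training data with statistics computed
    # from the current low-resolution test batch.
    mu = feats.mean(axis=0)
    var = feats.var(axis=0)
    return gamma * (feats - mu) / np.sqrt(var + eps) + beta
```

In a training loop, the total distillation loss would typically be a weighted sum of the instance and relation terms; at test time, only the normalization statistics change, so no gradient updates are needed for adaptation.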