We focus on ocular biometrics, specifically the periocular region (the area around the eye), which offers high discrimination and minimal acquisition constraints. We evaluate three Convolutional Neural Network architectures of varying depth and complexity to assess their effectiveness for periocular recognition. The networks are trained on 1,907,572 ocular crops extracted from the large-scale VGGFace2 database. This contrasts sharply with existing works, which typically train on small-scale periocular datasets of only a few thousand images. Experiments are conducted with ocular images from VGGFace2-Pose, a subset of VGGFace2 containing in-the-wild face images, and from the UFPR-Periocular database, which consists of selfies captured with mobile devices under on-screen user guidance. Due to the uncontrolled conditions of VGGFace2, the Equal Error Rates (EERs) obtained with ocular crops range from 9-15%, noticeably higher than the 3-6% EERs achieved using full-face images. In contrast, UFPR-Periocular yields significantly better performance (EERs of 1-2%), thanks to higher image quality and a more consistent acquisition protocol. To the best of our knowledge, these are the lowest EERs reported on the UFPR dataset to date.
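The Equal Error Rate reported above is the operating point at which the False Rejection Rate (FRR) equals the False Acceptance Rate (FAR). A minimal sketch of how this metric can be computed from genuine and impostor comparison scores is shown below; this is an illustrative threshold-sweep implementation for exposition, not the evaluation code used in the experiments.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate the EER from similarity scores.

    Sweeps a decision threshold over all observed scores and returns
    the error rate at the point where FRR and FAR are closest.
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    # FRR: fraction of genuine (same-identity) scores below the threshold.
    frr = np.array([(genuine < t).mean() for t in thresholds])
    # FAR: fraction of impostor (different-identity) scores at or above it.
    far = np.array([(impostor >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(frr - far))
    return (frr[i] + far[i]) / 2.0

# Toy example with synthetic, well-separated score distributions:
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)   # same-identity comparison scores
impostor = rng.normal(0.3, 0.1, 1000)  # different-identity comparison scores
print(f"EER: {equal_error_rate(genuine, impostor):.3f}")
```

The toy distributions here are synthetic; with real verification scores, a lower EER (e.g. the 1-2% obtained on UFPR-Periocular) indicates better separation between genuine and impostor comparisons.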