Smartphone-based iris recognition in the visible spectrum (VIS) remains difficult due to illumination variability, pigmentation differences, and the absence of standardized capture controls. This work presents a compact end-to-end pipeline that enforces ISO/IEC 29794-6 quality compliance at acquisition and demonstrates that accurate VIS iris recognition is feasible on commodity devices. Using a custom Android application that performs real-time framing, sharpness evaluation, and user feedback, we introduce the CUVIRIS dataset of 752 compliant images from 47 subjects. A lightweight MobileNetV3-based multi-task segmentation network (LightIrisNet) is developed for efficient on-device processing, and a transformer-based matcher (IrisFormer) is adapted to the VIS domain. Under a standardized evaluation protocol with comparative benchmarking against prior CNN baselines, OSIRIS attains a TAR of 97.9% at FAR = 0.01 (EER = 0.76%), while IrisFormer, trained solely on UBIRIS.v2, achieves an EER of 0.057% on CUVIRIS. The acquisition app, trained models, and a public subset of the dataset are released to support reproducibility. These results confirm that standardized capture combined with VIS-adapted lightweight models enables accurate and practical iris recognition on smartphones.