Smartphone-based iris recognition in the visible spectrum (VIS) remains difficult due to illumination variability, pigmentation differences, and the absence of standardized capture controls. This work presents a compact end-to-end pipeline that enforces ISO/IEC 29794-6 quality compliance at acquisition and demonstrates that accurate VIS iris recognition is feasible on commodity devices. Using a custom Android application that performs real-time framing, sharpness evaluation, and user feedback, we introduce the CUVIRIS dataset of 752 compliant images from 47 subjects. A lightweight MobileNetV3-based multi-task segmentation network (LightIrisNet) is developed for efficient on-device processing, and a transformer-based matcher (IrisFormer) is adapted to the VIS domain. Under a standardized evaluation protocol with comparative benchmarking against prior CNN baselines, OSIRIS attains a TAR of 97.9% at FAR = 0.01 (EER = 0.76%), while IrisFormer, trained solely on UBIRIS.v2, achieves an EER of 0.057% on CUVIRIS. The acquisition app, trained models, and a public subset of the dataset are released to support reproducibility. These results confirm that standardized capture combined with VIS-adapted lightweight models enables accurate and practical iris recognition on smartphones.
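As a point of reference for the reported figures, the sketch below shows one common way to compute TAR at a fixed FAR and the EER from genuine and impostor comparison scores. It is a minimal illustration, not the paper's evaluation code: the score arrays, the higher-is-more-similar convention, and the function names are assumptions introduced here.

import numpy as np

def tar_at_far(genuine, impostor, target_far=0.01):
    # Threshold chosen so that the top target_far fraction of impostor scores is accepted.
    threshold = np.quantile(impostor, 1.0 - target_far)
    # TAR = fraction of genuine comparisons accepted at that threshold.
    return float(np.mean(genuine >= threshold))

def eer(genuine, impostor):
    # Sweep candidate thresholds and find the point where FAR and FRR are closest.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])
    frr = np.array([np.mean(genuine < t) for t in thresholds])
    idx = int(np.argmin(np.abs(far - frr)))
    return float((far[idx] + frr[idx]) / 2.0)

# Illustrative synthetic scores only (not CUVIRIS data):
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.05, 1000)    # hypothetical genuine-pair similarities
impostor = rng.normal(0.5, 0.05, 10000)  # hypothetical impostor-pair similarities
print(tar_at_far(genuine, impostor, 0.01), eer(genuine, impostor))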