Biometric systems based on brain activity have been proposed as an alternative to passwords or as a complement to current authentication techniques. By leveraging the unique brainwave patterns of individuals, these systems offer the possibility of creating authentication solutions that are resistant to theft, hands-free, accessible, and potentially even revocable. However, despite the growing stream of research in this area, faster progress is hindered by reproducibility problems. Issues such as the lack of standard reporting schemes for performance results and system configuration, or the absence of common evaluation benchmarks, make comparability and proper assessment of different biometric solutions challenging. Moreover, future work is hampered when, as is so often the case, source code is not published open access. To bridge this gap, we introduce NeuroBench, a flexible open-source tool to benchmark brainwave-based authentication models. It incorporates nine diverse datasets, implements a comprehensive set of pre-processing parameters and machine learning algorithms, enables testing under two common adversary models (known vs. unknown attacker), and allows researchers to generate full performance reports and visualizations. We use NeuroBench to investigate the shallow classifiers and deep learning-based approaches proposed in the literature, and to test robustness across multiple sessions. We observe a 37.6% reduction in Equal Error Rate (EER) for unknown-attacker scenarios (typically not tested in the literature), and we highlight the importance of session variability for brainwave authentication. All in all, our results demonstrate the viability and relevance of NeuroBench in streamlining fair comparisons of algorithms, thereby furthering the advancement of brainwave-based authentication through robust methodological practices.
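For readers unfamiliar with the headline metric, the Equal Error Rate is the operating point where the false acceptance rate (FAR) equals the false rejection rate (FRR). The following is a minimal sketch of how an EER can be estimated from genuine-user and impostor similarity scores; the function name and the toy score distributions are illustrative assumptions, not part of NeuroBench's API.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Approximate the EER: the threshold at which the false acceptance
    rate (impostors accepted) equals the false rejection rate (genuine
    users rejected). Higher scores are assumed to indicate the genuine user."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))  # point where the two rates cross
    return (far[idx] + frr[idx]) / 2.0

# Toy example with synthetic, partially overlapping score distributions
rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.5, 500)   # hypothetical genuine-user scores
impostor = rng.normal(0.0, 0.5, 500)  # hypothetical attacker scores
eer = equal_error_rate(genuine, impostor)
print(f"EER = {eer:.3f}")
```

A lower EER indicates better separation between genuine users and attackers; the unknown-attacker setting is typically harder because impostor scores come from individuals never seen during enrollment.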