Recently, AI research has focused primarily on large language models (LLMs), where accuracy gains typically come from scaling up models and consuming more power. The power consumption of AI has become a significant societal issue; in this context, spiking neural networks (SNNs) offer a promising solution. SNNs operate in an event-driven manner, like the human brain, and compress information temporally. These characteristics allow SNNs to consume significantly less power than perceptron-based artificial neural networks (ANNs), positioning them as a next-generation neural network technology. However, societal concerns about AI extend beyond power consumption: the reliability of AI models is a global issue. For instance, adversarial attacks on AI models are a well-studied problem for traditional neural networks. Despite its importance, the stability and property verification of SNNs remains at an early stage of research. Most SNN verification methods are time-consuming and scale poorly, making practical application challenging. In this paper, we introduce temporal encoding to achieve practical performance in verifying the adversarial robustness of SNNs. We provide a theoretical analysis of this approach and demonstrate that it succeeds in verifying SNNs at previously unmanageable scales. Our contribution advances SNN verification to a practical level, facilitating the safer deployment of SNNs.