Binary Neural Networks (BNNs) offer a low-complexity, energy-efficient alternative to traditional full-precision neural networks by constraining their weights and activations to binary values. However, their discrete, highly non-linear behavior makes them difficult to explain, validate, and formally verify. As a result, BNNs remain largely opaque, limiting their suitability for safety-critical domains, where causal transparency and behavioral guarantees are essential. In this work, we introduce a Petri net (PN)-based framework that captures a BNN's internal operations as event-driven processes. By "eventizing" these operations, we expose their causal relationships and dependencies, enabling fine-grained analysis of concurrency, ordering, and state evolution. We construct modular PN blueprints for core BNN components, including activation, gradient computation, and weight updates, and compose them into a complete system-level model. We then validate the composed PN against a reference software-based BNN; verify it via reachability and structural checks to establish 1-safeness, deadlock freedom, mutual exclusion, and correct-by-construction causal sequencing; and assess its scalability and complexity at the segment, component, and system levels using the automated measurement tools in Workcraft. Overall, this framework enables causal introspection of transparent, event-driven BNNs that are amenable to formal reasoning and verification.