Rectified Linear Unit (ReLU) networks are piecewise-linear (PWL), so universal linear safety properties can be reduced to reasoning about linear constraints. Modern verifiers rely on SMT(LRA) procedures or MILP encodings, but a safety claim is only as trustworthy as the evidence it produces. We develop a proof-carrying verification core for PWL neural constraints on an input domain $D \subseteq \mathbb{R}^n$. We formalize the exact PWL semantics as a union of polyhedra indexed by activation patterns, relate this model to standard exact SMT/MILP encodings and to the canonical convex-hull (ideal) relaxation of a bounded ReLU, and introduce a small certificate calculus whose proof objects live over $\mathbb{Q}$. Two certificate types suffice for the core reasoning steps: entailment certificates validate linear consequences (bound tightening and learned cuts), while Farkas certificates prove infeasibility of strengthened counterexample queries (branch-and-bound pruning). We give an exact proof kernel that checks these artifacts in rational arithmetic, prove soundness and completeness for linear entailment, and show that infeasibility certificates admit sparse representatives depending only on dimension. Worked examples illustrate end-to-end certified reasoning without trusting the solver beyond its exported witnesses.
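The two certificate types described above can be sketched as exact rational checks. The snippet below is a minimal illustration, not the paper's kernel: it assumes constraint systems in row form $Ax \le b$ over $\mathbb{Q}$, and the function names and data layout are hypothetical. An entailment certificate is a nonnegative multiplier vector $y$ with $y^\top A = c$ and $y^\top b \le d$, witnessing $Ax \le b \models c^\top x \le d$; a Farkas certificate is a nonnegative $y$ with $y^\top A = 0$ and $y^\top b < 0$, witnessing infeasibility.

```python
from fractions import Fraction as Q

def dot(u, v):
    # Exact rational inner product.
    return sum(a * b for a, b in zip(u, v))

def check_entailment(A, b, c, d, y):
    """Accept iff y certifies that A x <= b entails c^T x <= d:
    y >= 0 componentwise, y^T A = c, and y^T b <= d."""
    if any(yi < 0 for yi in y):
        return False
    combo = [dot(y, col) for col in zip(*A)]  # y^T A, column by column
    return combo == list(c) and dot(y, b) <= d

def check_farkas(A, b, y):
    """Accept iff y certifies infeasibility of {x : A x <= b}:
    y >= 0 componentwise, y^T A = 0, and y^T b < 0."""
    if any(yi < 0 for yi in y):
        return False
    combo = [dot(y, col) for col in zip(*A)]
    return all(cj == 0 for cj in combo) and dot(y, b) < 0

# Entailment: x1 <= 2 and x2 <= 3 entail x1 + x2 <= 5, with y = (1, 1).
A = [[Q(1), Q(0)], [Q(0), Q(1)]]
b = [Q(2), Q(3)]
assert check_entailment(A, b, [Q(1), Q(1)], Q(5), [Q(1), Q(1)])

# Infeasibility: x >= 1 together with x <= 0, i.e. -x <= -1 and x <= 0;
# summing the rows with y = (1, 1) yields the contradiction 0 <= -1.
A2 = [[Q(-1)], [Q(1)]]
b2 = [Q(-1), Q(0)]
assert check_farkas(A2, b2, [Q(1), Q(1)])
```

Because every operation is over `Fraction`, the checker incurs no rounding: a certificate exported by a floating-point solver either replays exactly or is rejected, which is the sense in which the solver need not be trusted beyond its witnesses.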