For performance and verification in machine learning, new methods have recently been proposed that optimise learning systems to satisfy formally expressed logical properties. Among these methods, differentiable logics (DLs) are used to translate propositional or first-order formulae into loss functions deployed for optimisation in machine learning. At the same time, recent attempts to provide programming language support for the verification of neural networks have shown that DLs can be used to compile verification properties to machine-learning backends. This situation calls for stronger guarantees about the soundness of such compilers, the soundness and compositionality of DLs, and the differentiability and performance of the resulting loss functions. In this paper, we propose an approach to formalising existing DLs using the Mathematical Components library in the Coq proof assistant. Thanks to this formalisation, we are able to give uniform semantics to otherwise disparate DLs, provide formal proofs for arguments that were previously only made informally, uncover errors in previous work, and prove properties that had so far only been conjectured. This work is intended as a stepping stone towards programming language support for the verification of machine learning.
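To make the core idea concrete, the following is a minimal, hypothetical sketch of how a differentiable logic can interpret propositional formulae as values in [0, 1] (here using the product t-norm and its dual co-norm), so that "distance from satisfaction" becomes a loss differentiable in the atoms' values. The formula encoding and connective choices are illustrative assumptions, not the semantics or Coq development of this paper.

```python
# Hypothetical DL sketch: formulae are nested tuples, e.g.
# ("and", ("atom", "p"), ("atom", "q")). Atoms take values in [0, 1].

def truth(formula, env):
    """Interpret a formula under atom valuations env: name -> [0, 1]."""
    op = formula[0]
    if op == "atom":
        return env[formula[1]]
    if op == "not":
        return 1.0 - truth(formula[1], env)
    if op == "and":                       # product t-norm
        return truth(formula[1], env) * truth(formula[2], env)
    if op == "or":                        # probabilistic sum (dual co-norm)
        a, b = truth(formula[1], env), truth(formula[2], env)
        return a + b - a * b
    if op == "implies":                   # a -> b rewritten as (not a) or b
        return truth(("or", ("not", formula[1]), formula[2]), env)
    raise ValueError(f"unknown connective: {op}")

def loss(formula, env):
    """Loss is 0 exactly when the formula is fully satisfied (truth = 1)."""
    return 1.0 - truth(formula, env)

# Example property: (p /\ q) -> r
f = ("implies", ("and", ("atom", "p"), ("atom", "q")), ("atom", "r"))
print(loss(f, {"p": 0.9, "q": 0.8, "r": 0.5}))
```

Because every connective is interpreted by a smooth arithmetic operation, the resulting loss can be minimised by gradient descent; proving that such translations are sound and compositional is precisely the kind of guarantee the formalisation targets.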