Neurosymbolic approaches that combine Large Language Models (LLMs) with formal methods have recently achieved strong results on mathematics-oriented theorem-proving benchmarks. However, success on competition-style mathematics does not by itself demonstrate the ability to construct proofs about real-world implementations. We address this gap with a benchmark derived from an industrial cryptographic library whose assembly routines are already verified in HOL Light. s2n-bignum is a library used at AWS that provides fast assembly routines for cryptography, and its correctness is established by formal verification. Formally verifying this library was a significant achievement for the Automated Reasoning Group and involved two tasks: (1) precisely specifying the correct behavior of a program as a mathematical proposition, and (2) proving that the proposition holds. In the case of s2n-bignum, both tasks were carried out by human experts. In \textit{s2n-bignum-bench}, we provide the formal specification and ask the LLM to generate a proof script that is accepted by HOL Light within a fixed proof-check timeout. To our knowledge, \textit{s2n-bignum-bench} is the first public benchmark focused on machine-checkable proof synthesis for industrial low-level cryptographic assembly routines in HOL Light. This benchmark provides a challenging and practically relevant testbed for evaluating LLM-based theorem proving beyond competition mathematics. The code to set up and use the benchmark is available here: \href{https://github.com/kings-crown/s2n-bignum-bench}{s2n-bignum-bench}.
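To make the task concrete, the following toy example shows the shape of a HOL Light interaction: the benchmark supplies the formal statement (the first argument to \texttt{prove}), and the model must supply a tactic script (the second argument) that HOL Light accepts within the timeout. This is a deliberately simplified sketch using a trivial arithmetic goal, not an actual s2n-bignum specification, whose statements instead describe the machine-level behavior of assembly routines.

```ocaml
(* Toy illustration of the benchmark task in HOL Light (OCaml toplevel).
   Given: the goal term `!a b:num. a + b = b + a`.
   Required from the model: a tactic script, here ARITH_TAC,
   that makes HOL Light's `prove` succeed and return a theorem. *)
let ADD_COMM_EXAMPLE = prove
 (`!a b:num. a + b = b + a`,
  ARITH_TAC);;
```

A real benchmark instance replaces the goal above with an expert-written s2n-bignum correctness specification, and success is judged solely by whether HOL Light checks the generated script within the fixed timeout.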