Proof engineering is notoriously labor-intensive: proofs that are straightforward on paper often require lengthy scripts in theorem provers. Recent advances in large language models (LLMs) create new opportunities for proof automation: modern LLMs not only generate proof scripts but also support agentic behavior, exploring codebases and iteratively refining their outputs against prover feedback. These advances enable an emerging scheme in which LLM-based agents undertake most proof engineering under human guidance. Humans provide mathematical insight (definitions, theorems, proof strategies); agents handle the mechanical work of proof development. We call this scheme agentic proof automation. We present the scheme through a case study: mechanizing, in Lean 4, the semantic type soundness of a sophisticated formal system, System Capless; the resulting mechanization comprises over 14,000 lines of code. Off-the-shelf LLM agents, equipped with a single lightweight proof-checking tool, completed 189 proof engineering tasks with an 87% success rate; only 16% required human intervention. The case study demonstrates that agents are capable proof engineers that substantially boost productivity, though they fall short in creative reasoning and still require human guidance in certain cases. We release an interactive explorer where readers can examine all agent interactions; the mechanization is open-sourced for experiments and extensions.
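To give a flavor of the kind of statement being mechanized, the sketch below shows a toy semantic type soundness theorem in Lean 4: syntactic typing implies semantic typing (the fundamental theorem), which for closed terms implies safety. This is a minimal illustrative sketch only; the types, relations, and lemma names (Ty, Term, HasType, SemType, Safe, fundamental, adequacy) are hypothetical and are not taken from the System Capless mechanization.

```lean
-- Illustrative toy sketch of a semantic type soundness statement in Lean 4.
-- All names here are hypothetical, not from the System Capless mechanization.

-- A minimal object language: unit and function types, lambda terms.
inductive Ty where
  | unit
  | arrow (a b : Ty)

inductive Term where
  | star
  | var (n : Nat)
  | lam (body : Term)
  | app (f a : Term)

-- Syntactic typing judgment (rules elided).
axiom HasType : List Ty → Term → Ty → Prop

-- Semantic typing, typically defined via a logical relation (elided).
axiom SemType : List Ty → Term → Ty → Prop

-- Safety: closed terms do not get stuck (definition elided).
axiom Safe : Term → Prop

-- Fundamental theorem: syntactic typing implies semantic typing.
axiom fundamental :
  ∀ (Γ : List Ty) (t : Term) (T : Ty), HasType Γ t T → SemType Γ t T

-- Adequacy: semantically well-typed closed terms are safe.
axiom adequacy : ∀ (t : Term) (T : Ty), SemType [] t T → Safe t

-- Semantic type soundness follows by composing the two lemmas.
theorem type_soundness (t : Term) (T : Ty) (h : HasType [] t T) : Safe t :=
  adequacy t T (fundamental [] t T h)
```

In a workflow like the one described above, lemmas of this shape are the units of work handed to the agents, which iterate on candidate proofs against the proof checker's feedback while humans supply the definitions, theorem statements, and proof strategies.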