Program verification relies on loop invariants, yet automatically discovering strong invariants remains a long-standing challenge. We investigate whether large language models (LLMs) can accelerate program verification by generating useful loop invariants. We introduce Quokka, the first effective framework for LLM-based invariant synthesis that provides sound evaluation while achieving state-of-the-art speedups. Unlike prior work that designs complex, highly customized algorithms, Quokka employs a simple and principled verification procedure. We construct a benchmark of 866 instances and evaluate 9 state-of-the-art LLMs across multiple model families. Our results show that Quokka consistently outperforms all prior LLM-based verifiers, achieving speedups of at least 1.2x on 81 instances, compared to 39 instances for the previous best approach. We further demonstrate that supervised fine-tuning and Best-of-N sampling yield measurable additional improvements in verification speed.