We present DafnyPro, an inference-time framework that enhances LLMs for generating verification annotations in Dafny. DafnyPro comprises three key components: a diff-checker that prevents modifications to the base program logic, a pruner that removes unnecessary invariants, and a hint-augmentation system that retrieves and applies predefined, problem-independent proof strategies. We evaluate DafnyPro with Claude Sonnet 3.5 and 3.7 on four benchmarks — Clover, MBPP-Dafny, HumanEval-Dafny, and DafnyBench — and observe consistent performance gains in all cases. Notably, on DafnyBench, the most challenging benchmark, Claude Sonnet 3.5 enhanced with DafnyPro achieves 86% correct proofs, a 16-percentage-point improvement over the base model. We also fine-tune two Qwen models on training data derived from verification attempts by larger models enhanced with DafnyPro. Our 7B and 14B models achieve 68% and 70% correct proofs on DafnyBench, respectively, demonstrating that smaller models can maintain high verification accuracy.