RTL implementations frequently lack up-to-date or consistent specifications, making comprehension, maintenance, and verification costly and error-prone. While prior work has explored generating specifications from RTL using large language models (LLMs), ensuring that the generated documents faithfully capture design intent remains a major challenge. We present SpecLoop, an agentic framework for RTL-to-specification generation with a formal-verification-driven iterative feedback loop. SpecLoop first generates candidate specifications and then reconstructs RTL from them; it then applies formal equivalence checking between the reconstructed RTL and the original design to validate functional consistency. When mismatches are detected, counterexamples are fed back to iteratively refine the specifications until equivalence is proven or no further progress can be made. Experiments across multiple LLMs and RTL benchmarks show that incorporating formal verification feedback substantially improves specification correctness and robustness over LLM-only baselines, demonstrating the effectiveness of verification-guided specification generation.
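The generate-reconstruct-check loop described above can be sketched as follows. This is a minimal illustrative sketch only: the functions `generate_spec`, `reconstruct_rtl`, and `check_equivalence` are hypothetical stand-ins (the paper's actual pipeline uses LLM calls and a formal equivalence checker), and the toy implementations here exist purely to make the control flow runnable.

```python
# Hypothetical sketch of SpecLoop's verification-driven feedback loop.
# All three helper functions are illustrative stand-ins, not the
# framework's real implementation.

def generate_spec(rtl, feedback=None):
    # Stand-in for an LLM call that drafts or refines a specification;
    # here we simply record which counterexample (if any) guided it.
    return {"rtl": rtl, "refined_with": feedback}

def reconstruct_rtl(spec):
    # Stand-in for LLM-based RTL reconstruction from the spec.
    # Toy behavior: the first (unrefined) spec yields an imperfect draft.
    return spec["rtl"] if spec["refined_with"] else spec["rtl"] + "_draft"

def check_equivalence(original, reconstructed):
    # Stand-in for a formal equivalence checker: returns whether the two
    # designs match and, if not, a counterexample to feed back.
    if original == reconstructed:
        return True, None
    return False, "counterexample_0"

def specloop(rtl, max_iters=5):
    """Iterate until equivalence is proven or the iteration budget runs out."""
    feedback = None
    for _ in range(max_iters):
        spec = generate_spec(rtl, feedback)
        reconstructed = reconstruct_rtl(spec)
        equivalent, cex = check_equivalence(rtl, reconstructed)
        if equivalent:
            return spec          # functional consistency validated
        feedback = cex           # feed the counterexample back to refine
    return None                  # no further progress could be made
```

In this toy run the first iteration fails equivalence, the counterexample is fed back, and the second iteration's refined spec reconstructs matching RTL, mirroring the loop's termination condition in the abstract.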