In digital circuit design, testbenches are the cornerstone of simulation-based hardware verification. Traditional testbench generation methodologies remain partially manual, making it inefficient to cover diverse test scenarios and demanding costly designer time. Large Language Models (LLMs) have demonstrated their potential in automating the circuit design flow; however, directly applying LLMs to generate testbenches yields a low pass rate. To address this challenge, we introduce AutoBench, the first LLM-based testbench generator for digital circuit design, which requires only the description of the design under test (DUT) to automatically generate comprehensive testbenches. In AutoBench, a hybrid testbench structure and a self-checking system are realized using LLMs. To validate the generated testbenches, we also introduce an automated evaluation framework that assesses their quality from multiple perspectives. Experimental results demonstrate that AutoBench achieves a 57% improvement in the testbench pass@1 ratio over the baseline that generates testbenches directly with LLMs. On 75 sequential circuits, AutoBench achieves 3.36 times the testbench pass@1 ratio of the baseline. The source code and experimental results are open-sourced at https://github.com/AutoBench/AutoBench