Test generation has long been a critical and labor-intensive process in hardware design verification. Recently, the emergence of Large Language Models (LLMs), with their advanced understanding and inference capabilities, has introduced a novel approach. In this work, we investigate the integration of an LLM into the Coverage-Directed test Generation (CDG) process, where the LLM functions as a Verilog Reader: it accurately grasps the code logic and thereby generates stimuli that reach unexplored code branches. We compare our framework against random testing on a self-designed Verilog benchmark suite. Experiments demonstrate that our framework outperforms random testing on designs within the LLM's comprehension scope. We also propose prompt engineering optimizations that expand the LLM's understanding scope and improve its accuracy.
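The loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: `run_simulation` stands in for a Verilog simulator reporting covered branch IDs, and `query_llm` stands in for prompting the LLM with the Verilog source and the list of unreached branches, then parsing stimuli from its reply. All function names and the toy coverage model are hypothetical.

```python
def run_simulation(stimulus):
    # Stand-in for a Verilog simulator run: returns the set of branch IDs
    # covered by this stimulus. Here, a toy DUT whose branches depend on
    # simple properties of the input value.
    covered = {"b0"}            # reset branch, always exercised
    if stimulus > 10:
        covered.add("b1")       # branch taken for large inputs
    if stimulus % 2 == 0:
        covered.add("b2")       # branch taken for even inputs
    return covered

def query_llm(uncovered_branches):
    # Stand-in for the LLM acting as a "Verilog Reader": given the source
    # and the uncovered branches, it proposes stimuli expected to reach them.
    # Hard-coded guesses replace a real model call in this sketch.
    guesses = {"b1": 12, "b2": 4}
    return [guesses[b] for b in uncovered_branches if b in guesses]

def cdg_loop(all_branches, max_iters=5):
    # Coverage-directed loop: simulate, measure coverage, ask the LLM for
    # stimuli targeting whatever remains uncovered, and repeat.
    covered = set()
    for _ in range(max_iters):
        uncovered = sorted(all_branches - covered)
        if not uncovered:
            break
        for stimulus in query_llm(uncovered):
            covered |= run_simulation(stimulus)
    return covered

print(sorted(cdg_loop({"b0", "b1", "b2"})))
```

Random testing would instead draw stimuli blindly; the LLM-guided step replaces that draw with stimuli targeted at the specific branches still missing from the coverage report.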