Recent advancements in large language models (LLMs) have catalyzed significant interest in the automatic generation of Register-Transfer Level (RTL) code, particularly Verilog, from natural language instructions. While commercial LLMs like ChatGPT have dominated this domain, open-source alternatives have lagged considerably in performance, limiting the flexibility and data privacy of this emerging technology. This study introduces a novel approach that applies reinforcement learning with golden code feedback to enhance the performance of pre-trained models. Leveraging open-source data and base models, we achieve state-of-the-art (SOTA) results by a substantial margin. Notably, our 6.7B-parameter model \ours{} outperforms the current best-in-class 13B and 16B models. Furthermore, through a comprehensive analysis of the limitations of direct fine-tuning and the training dynamics of reinforcement learning, we posit that developing comprehensive supervisory signals that align with the inherent parallel semantics of Verilog code is critical to effective generation. The code and data associated with this research are publicly available at \url{https://github.com/CatIIIIIIII/veriseek}. The model weights can be accessed at \url{https://huggingface.co/WANGNingroci/VeriSeek}.