This paper investigates the use of Large Language Models (LLMs) for automating the generation of hardware description code, aiming to explore their potential in supporting and enhancing the development of efficient neuromorphic computing architectures. Building on our prior work, we employ OpenAI's ChatGPT-4 with natural language prompts to synthesize an RTL Verilog module of a programmable recurrent spiking neural network, while also generating test benches to assess the system's correctness. The resulting design was validated in three case studies, the exclusive OR, Iris flower classification, and MNIST handwritten digit classification, achieving accuracies of up to 96.6%. To verify its synthesizability and implementability, the design was prototyped on a field-programmable gate array and implemented in SkyWater 130 nm technology using an open-source electronic design automation flow. Additionally, we have submitted the design to the Tiny Tapeout 6 chip fabrication program to further evaluate its on-chip performance in the future.