This paper presents SimulatorCoder, an agent powered by large language models (LLMs) that generates and optimizes deep neural network (DNN) accelerator simulators from natural language descriptions. By integrating domain-specific prompt engineering, including In-Context Learning (ICL), Chain-of-Thought (CoT) reasoning, and a multi-round feedback-verification flow, SimulatorCoder systematically transforms high-level functional requirements into efficient, executable, and architecture-aligned simulator code. Experiments on a customized SCALE-Sim benchmark demonstrate that structured prompting and feedback mechanisms substantially improve both code generation accuracy and simulator performance. The resulting simulators not only maintain cycle-level fidelity, with less than 1% error relative to manually implemented counterparts, but also consistently achieve lower simulation runtimes, highlighting the effectiveness of LLM-based methods in accelerating simulator development. Our code is available at https://github.com/xiayuhuan/SimulatorCoder.