Analog layout synthesis faces significant challenges due to its reliance on manual processes, considerable time consumption, and performance instability. Current Bayesian optimization (BO)-based techniques for analog layout synthesis, despite their potential for automation, suffer from slow convergence and extensive data requirements, limiting their practical application. This paper presents the \texttt{LLANA} framework, a novel approach that leverages Large Language Models (LLMs) to enhance BO, exploiting the few-shot learning abilities of LLMs to generate analog design-dependent parameter constraints more efficiently. Experimental results demonstrate that \texttt{LLANA} not only achieves performance comparable to state-of-the-art (SOTA) BO methods but also enables more effective exploration of the analog circuit design space, owing to the superior contextual understanding and learning efficiency of LLMs. The code is available at \url{https://github.com/dekura/LLANA}.