Analog layout synthesis faces significant challenges due to its dependence on manual processes, considerable time requirements, and performance instability. Current Bayesian Optimization (BO)-based techniques for analog layout synthesis, despite their potential for automation, suffer from slow convergence and extensive data requirements, limiting their practical applicability. This paper presents \texttt{LLANA}, a novel framework that leverages Large Language Models (LLMs) to enhance BO, exploiting the few-shot learning abilities of LLMs to generate analog design-dependent parameter constraints more efficiently. Experimental results demonstrate that \texttt{LLANA} not only achieves performance comparable to state-of-the-art (SOTA) BO methods but also enables more effective exploration of the analog circuit design space, thanks to the LLMs' superior contextual understanding and learning efficiency. The code is available at https://github.com/dekura/LLANA.