Language, a prominent human ability for expression through sequential symbols, has been computationally mastered by recent advances in large language models (LLMs). By recurrently predicting the next word with huge neural models, LLMs have shown unprecedented capabilities in understanding and reasoning. Circuits, as the "language" of electronic design, specify the functionality of an electronic device through cascaded connections of logic gates. Can circuits, then, also be mastered by a sufficiently large "circuit model", which conquers electronic design tasks simply by predicting the next logic gate? In this work, we take the first step toward exploring such a possibility. Two primary barriers impede the straightforward application of LLMs to circuits: their complex, non-sequential structure, and an intolerance of hallucination arising from strict constraints (e.g., equivalence). For the first barrier, we encode a circuit as a memory-less, depth-first traversal trajectory, which allows Transformer-based neural models to better leverage its structural information and to predict the next gate on the trajectory as a circuit model. For the second barrier, we introduce an equivalence-preserving decoding process, which ensures that every token in the generated trajectory adheres to the specified equivalence constraints. Moreover, the circuit model can also be regarded as a stochastic policy for tackling optimization-oriented circuit design tasks. Experimentally, we trained a Transformer-based model with 88M parameters, named "Circuit Transformer", which demonstrates impressive performance on end-to-end logic synthesis. Combined with Monte-Carlo tree search, Circuit Transformer significantly improves over resyn2 while retaining strict equivalence, showcasing the potential of generative AI in conquering electronic design challenges.
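To make the trajectory encoding concrete, the following is a minimal Python sketch of serializing a small and-inverter-style circuit into a depth-first gate-token trajectory. The `Gate` structure, the token vocabulary, and the re-expansion of shared subcircuits are illustrative assumptions, not the paper's exact encoding.

```python
# A minimal sketch (not the authors' implementation) of encoding a small
# and-inverter-style circuit as a depth-first traversal trajectory of
# gate tokens. The Gate dataclass and token names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Gate:
    kind: str                    # "PI" (primary input), "AND", or "NOT"
    fanins: tuple = ()           # child gates; empty for primary inputs
    name: Optional[str] = None   # label used only for PI tokens

def encode_dfs(gate: Gate) -> List[str]:
    """Serialize the circuit rooted at `gate` into a token trajectory by
    depth-first traversal. The encoding is memory-less in the sense that
    shared subcircuits are re-expanded rather than referenced, so each
    token depends only on the path from the root, not on a global node
    table."""
    if gate.kind == "PI":
        return [gate.name]
    tokens = [gate.kind]         # emit the gate token before its fanins
    for child in gate.fanins:
        tokens.extend(encode_dfs(child))
    return tokens

# Example: f = AND(NOT(a), b); a circuit model would predict "the next
# gate" token by token along this trajectory.
a, b = Gate("PI", name="a"), Gate("PI", name="b")
f = Gate("AND", fanins=(Gate("NOT", fanins=(a,)), b))
print(encode_dfs(f))             # ['AND', 'NOT', 'a', 'b']
```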
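Similarly, the equivalence-preserving decoding step can be pictured as constrained sampling: before each prediction, next-gate tokens that would violate the specified equivalence constraint are masked out, so every token in the generated trajectory stays feasible. The sketch below illustrates only the masking pattern; the `feasible` mask is a hypothetical stand-in for the paper's actual constraint check.

```python
# A hedged sketch of equivalence-preserving decoding: restrict sampling
# of the next gate token to equivalence-feasible positions by masking
# the model's logits. The feasibility oracle itself is assumed given.
import numpy as np

def decode_step(logits: np.ndarray, feasible: np.ndarray) -> int:
    """Sample the next gate token from `logits`, restricted to positions
    where `feasible` is True, so the emitted token keeps the equivalence
    constraint satisfiable."""
    masked = np.where(feasible, logits, -np.inf)   # forbid infeasible tokens
    probs = np.exp(masked - masked.max())          # stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

# Example: suppose only tokens 0 and 3 remain equivalence-feasible.
logits = np.array([1.2, 0.5, -0.3, 0.8])
feasible = np.array([True, False, False, True])
print(decode_step(logits, feasible))               # always 0 or 3
```

Because the masked distribution is still a valid stochastic policy, the same mechanism composes naturally with search procedures such as Monte-Carlo tree search for optimization-oriented tasks.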