Global placement, a critical step in the physical design of computer chips, is essential for optimizing chip performance. Prior global placement methods optimize each circuit design individually from scratch; by neglecting transferable knowledge, they limit solution efficiency and chip performance as circuit complexity drastically increases. This study presents TransPlace, a global placement framework that learns to place millions of mixed-size cells in continuous space. TransPlace introduces i) a Netlist Graph to efficiently model netlist topology, ii) Cell-flow and relative position encoding to learn SE(2)-invariant representations, iii) a tailored graph neural network architecture for informed parameterization of placement knowledge, and iv) a two-stage strategy for coarse-to-fine placement. Compared with state-of-the-art placement methods, TransPlace, trained on a few high-quality placements, places unseen circuits with a 1.2x speedup while reducing congestion by 30%, timing by 9%, and wirelength by 5%.