Engineering change orders (ECOs) in late design stages make minimal fixes to recover from timing shifts caused by excessive IR drops. This paper integrates IR-drop-aware timing analysis and ECO timing optimization using reinforcement learning (RL). The method operates after physical design and power grid synthesis, and rectifies IR-drop-induced timing degradation through gate sizing. It incorporates the Lagrangian relaxation (LR) technique into a novel RL framework, which trains a relational graph convolutional network (R-GCN) agent to sequentially size gates to fix timing violations. The R-GCN agent outperforms a classical LR-only algorithm: in an open 45nm technology, it (a) moves the Pareto front of the delay-power tradeoff curve to the left, (b) saves runtime over prior approaches by running fast inference with trained models, and (c) reduces perturbation to the placement by sizing fewer cells. The RL model is transferable across timing specifications, and to unseen designs with fine-tuning.
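To make the LR-based objective concrete, the sketch below shows a toy sequential gate-sizing loop that minimizes a Lagrangian-relaxed cost (power plus a multiplier times delay). It is a hypothetical illustration with made-up size/delay/power values and a greedy stand-in policy, not the paper's R-GCN agent; the point is only how the LR multiplier trades delay against power per sizing decision.

```python
# Hypothetical toy model: each gate has three candidate sizes,
# each with a (delay, power) pair. Values are illustrative only.
SIZES = [(1.0, 1.0), (0.7, 1.6), (0.5, 2.5)]  # (delay, power)

def lr_cost(delay, power, lam):
    # Lagrangian-relaxed objective: power + lambda * delay.
    # A larger lambda penalizes delay more, pushing toward bigger gates.
    return power + lam * delay

def size_gates(num_gates, lam):
    """Size gates one at a time, greedily minimizing the LR cost.

    An RL agent (as in the paper) would replace this greedy choice
    with a learned policy, but would optimize a similar reward.
    """
    choices, total_delay, total_power = [], 0.0, 0.0
    for _ in range(num_gates):
        best = min(range(len(SIZES)),
                   key=lambda i: lr_cost(*SIZES[i], lam))
        d, p = SIZES[best]
        choices.append(best)
        total_delay += d
        total_power += p
    return choices, total_delay, total_power
```

With a small multiplier the smallest (lowest-power) size wins every decision; with a large multiplier the fastest size wins, tracing out the delay-power tradeoff that the ECO flow navigates.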