Circuit representation learning is increasingly pivotal in Electronic Design Automation (EDA), serving various downstream tasks with enhanced model efficiency and accuracy. One notable work, DeepSeq, pioneered sequential circuit learning by encoding temporal correlations. However, it suffers from significant limitations, including prolonged execution times and architectural inefficiencies. To address these issues, we introduce DeepSeq2, a novel framework that enhances sequential circuit learning by mapping circuits into three distinct embedding spaces (structure, function, and sequential behavior), allowing for a more nuanced representation that captures the inherent complexities of circuit dynamics. By employing an efficient Directed Acyclic Graph Neural Network (DAG-GNN) that circumvents the recursive propagation used in DeepSeq, DeepSeq2 significantly reduces execution time and improves model scalability. Moreover, DeepSeq2 incorporates a unique supervision mechanism that captures transitioning behaviors within circuits more effectively. DeepSeq2 sets a new benchmark in sequential circuit representation learning, outperforming prior works in power estimation and reliability analysis.