As circuit designs become more intricate, obtaining accurate performance estimates early enough for effective design-space exploration becomes increasingly time-consuming. Traditional logic optimization approaches often rely on proxy metrics to approximate post-mapping performance and area. However, these proxies do not always correlate well with the actual post-mapping delay and area, resulting in suboptimal designs. To address this issue, we explore a ground-truth-based optimization flow that directly incorporates the exact post-mapping delay and area during optimization. While this approach improves design quality, it also significantly increases computational cost, particularly for large-scale designs. To overcome this runtime challenge, we apply machine learning models that predict post-mapping delay and area from features extracted from AIGs. Our experimental results show that the models achieve high prediction accuracy and generalize well to unseen designs. Furthermore, the ML-enhanced logic optimization flow significantly reduces runtime while maintaining comparable performance and area outcomes.