The growing interest in Explainable Artificial Intelligence (XAI) has motivated promising work on computing optimal interpretable machine learning models, especially decision trees. Such models are typically optimal with respect to compactness or empirical accuracy, and recent work has focused on improving efficiency because of inherent scalability issues. However, the application of such models to practical problems remains limited. Approximate Logic Synthesis (ALS), an emerging problem in circuit design, aims to reduce circuit complexity by sacrificing correctness. Recently, several heuristic machine learning methods have been applied to ALS, learning approximate circuits from samples of input-output pairs. In this paper, we propose a new ALS methodology that realizes the approximation by learning decision trees that are optimal in empirical accuracy. Compared with previous heuristic ALS methods, the optimality guarantee yields a more controllable trade-off between circuit complexity and accuracy. Experimental results show clear improvements in the quality of the approximated designs (circuit complexity and accuracy) over state-of-the-art approaches.
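To make the core idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: it exhaustively searches for a decision tree of bounded depth that maximizes empirical accuracy on input-output samples of a small Boolean function (here a hypothetical 3-input majority gate standing in for an exact circuit). The depth bound plays the role of the complexity budget, and the returned accuracy is the empirical accuracy the abstract refers to.

```python
# Hedged sketch: brute-force search for an accuracy-optimal, depth-limited
# decision tree over samples of a Boolean function. The target function and
# depth limit below are illustrative assumptions, not from the paper.
from itertools import product

def majority(a, b, c):
    """Exact 3-input majority gate, standing in for the original circuit."""
    return int(a + b + c >= 2)

# Full truth table as (input bits, output bit) samples.
samples = [(bits, majority(*bits)) for bits in product((0, 1), repeat=3)]

def best_tree(rows, depth):
    """Return (num_correct, tree) for an accuracy-optimal tree of depth <= depth.
    A tree is either ('leaf', value) or ('node', var, low_subtree, high_subtree)."""
    ones = sum(y for _, y in rows)
    zeros = len(rows) - ones
    # Base candidate: a single leaf predicting the majority label.
    best = (max(zeros, ones), ('leaf', int(ones >= zeros)))
    if depth > 0 and rows:
        for var in range(len(rows[0][0])):
            lo = [r for r in rows if r[0][var] == 0]
            hi = [r for r in rows if r[0][var] == 1]
            c0, t0 = best_tree(lo, depth - 1)
            c1, t1 = best_tree(hi, depth - 1)
            if c0 + c1 > best[0]:
                best = (c0 + c1, ('node', var, t0, t1))
    return best

# Depth limit 2 (complexity budget) trades accuracy for a smaller tree:
correct, tree = best_tree(samples, 2)
accuracy = correct / len(samples)  # 6/8 = 0.75: two rows are misclassified
```

Raising the depth limit to 3 recovers the exact function (accuracy 1.0), which illustrates the controllable complexity-accuracy trade-off; practical optimal-decision-tree methods replace this exponential enumeration with specialized exact solvers.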