Optimizing costly black-box functions within a constrained evaluation budget presents significant challenges in many real-world applications. Surrogate Optimization (SO) is a common solution, yet the opacity introduced by the complexity of surrogate models and the sampling core (e.g., acquisition functions) often leads to a lack of explainability and transparency. While the existing literature has primarily concentrated on enhancing convergence to global optima, the practical interpretation of newly proposed strategies remains underexplored, especially in batch evaluation settings. In this paper, we propose \emph{Inclusive} Explainability Metrics for Surrogate Optimization (IEMSO), a comprehensive set of model-agnostic metrics designed to enhance the transparency, trustworthiness, and explainability of SO approaches. Through these metrics, we provide both intermediate and post-hoc explanations to practitioners before and after performing expensive evaluations, helping them build trust in the process. We consider four primary categories of metrics, each targeting a specific aspect of the SO process: Sampling Core Metrics, Batch Properties Metrics, Optimization Process Metrics, and Feature Importance. Our experimental evaluations demonstrate the significant potential of the proposed metrics across different benchmarks.