This paper presents a novel approach to the output range estimation problem in Deep Neural Networks (DNNs) by integrating a Simulated Annealing (SA) algorithm tailored to operate within constrained domains and to converge towards global optima. The method addresses the challenges posed by the lack of local geometric information and the high non-linearity inherent in DNNs, making it applicable to a wide variety of architectures, with a special focus on Residual Networks (ResNets) due to their practical importance. Unlike existing methods, our algorithm imposes minimal assumptions on the internal architecture of neural networks, extending its usability to complex models. Theoretical analysis guarantees convergence, while extensive empirical evaluations, including optimization tests on functions with multiple local minima, demonstrate the robustness of the algorithm in navigating non-convex response surfaces. The experimental results highlight the algorithm's efficiency in accurately estimating DNN output ranges, even in scenarios characterized by high non-linearity and complex constraints. For reproducibility, the Python code and datasets used in the experiments are publicly available through our GitHub repository.
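To make the abstract's idea concrete, the following is a minimal sketch of simulated annealing used to estimate the output range of a network over a box-constrained input domain. The toy two-input network, the Gaussian proposal step, and the geometric cooling schedule are all illustrative assumptions, not the paper's actual implementation; the sampler treats the network as a black box, mirroring the paper's point that no local geometric information (e.g. gradients) is required.

```python
import math
import random

random.seed(0)

# Hypothetical toy network standing in for the DNN under analysis
# (the paper targets general DNNs/ResNets; this tiny MLP is only
# an illustrative stand-in).
def net(x):
    h1 = math.tanh(2.0 * x[0] - 1.5 * x[1] + 0.5)
    h2 = math.tanh(-1.0 * x[0] + 0.7 * x[1])
    return 1.3 * h1 - 0.8 * h2

def sa_extremum(f, bounds, maximize=False, iters=5000, t0=1.0, cooling=0.999):
    """Simulated annealing over a box-constrained input domain.

    Returns (argmin_or_argmax, extreme_value) found for f.
    """
    sign = -1.0 if maximize else 1.0
    x = [random.uniform(lo, hi) for lo, hi in bounds]
    best_x, best_v = x[:], sign * f(x)
    cur_v, t = best_v, t0
    for _ in range(iters):
        # Propose a neighbour and clip it back into the box,
        # so every candidate respects the constrained domain.
        cand = [min(max(xi + random.gauss(0.0, 0.1), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        v = sign * f(cand)
        # Metropolis acceptance: always accept improvements, and
        # occasionally accept worse moves to escape local minima.
        if v < cur_v or random.random() < math.exp((cur_v - v) / max(t, 1e-12)):
            x, cur_v = cand, v
            if v < best_v:
                best_x, best_v = x[:], v
        t *= cooling  # geometric cooling schedule
    return best_x, sign * best_v

# Estimate the output range over the input box [-1, 1]^2 by running
# SA twice: once minimizing, once maximizing.
bounds = [(-1.0, 1.0), (-1.0, 1.0)]
_, lo = sa_extremum(net, bounds, maximize=False)
_, hi = sa_extremum(net, bounds, maximize=True)
print(f"estimated output range: [{lo:.3f}, {hi:.3f}]")
```

Because each estimate is the value of the network at an actually sampled input, the returned interval is an inner approximation of the true range; the paper's convergence analysis concerns conditions under which such estimates approach the true global extrema.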