Gradient-based optimization methods are commonly used to identify local optima in high-dimensional spaces. When derivatives cannot be evaluated directly, stochastic estimators can provide approximate gradients. However, these estimators' perturbation-based sampling of the objective function introduces variance that can lead to slow convergence. In this paper, we present dimensional peeking, a variance reduction method for gradient estimation in discrete optimization via simulation. By lifting the sampling granularity from scalar values to classes of values that follow the same control flow path, we increase the information gathered per simulation evaluation. Our derivation from an established smoothed gradient estimator shows that the method does not introduce any bias. We present an implementation via a custom numerical data type to transparently carry out dimensional peeking over C++ programs. Variance reductions by factors of up to 7.9 are observed for three simulation-based optimization problems with high-dimensional input. The optimization progress compared to three meta-heuristics shows that dimensional peeking increases the competitiveness of zeroth-order optimization for discrete and non-convex simulations.