We consider the design of smoothings of the (coordinate-wise) max function on $\mathbb{R}^d$ in the infinity norm. The LogSumExp function $f(x)=\ln\left(\sum_{i=1}^d \exp(x_i)\right)$ provides a classical smoothing, differing from the max function in value by at most $\ln(d)$. We give an elementary construction of a lower bound, establishing that every overestimating smoothing of the max function must differ from it by at least $\sim 0.8145\ln(d)$. Hence, LogSumExp is optimal up to constant factors. However, in small dimensions we provide stronger, exactly optimal smoothings attaining our lower bound, showing that the entropy-based LogSumExp approach to smoothing is not exactly optimal.
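The $\ln(d)$ approximation bound for LogSumExp can be checked numerically. The sketch below (an illustration, not part of the paper's construction) implements LogSumExp with the standard max-shift for numerical stability and verifies that it overestimates the max by at most $\ln(d)$, with equality when all coordinates are equal:

```python
import math

def logsumexp(x):
    """Numerically stable LogSumExp: ln(sum_i exp(x_i))."""
    m = max(x)  # shift by the max so exp() never overflows
    return m + math.log(sum(math.exp(xi - m) for xi in x))

x = [0.5, -1.2, 3.0, 3.0]
gap = logsumexp(x) - max(x)
# LogSumExp overestimates the max, by at most ln(d)
assert 0.0 <= gap <= math.log(len(x))

# The bound is tight exactly when all coordinates coincide:
# logsumexp([c]*d) = c + ln(d)
d = 8
tight_gap = logsumexp([1.0] * d) - 1.0
assert abs(tight_gap - math.log(d)) < 1e-12
```

The max-shift rewrites $\ln\sum_i e^{x_i}$ as $m + \ln\sum_i e^{x_i-m}$ with $m=\max_i x_i$, which is algebraically identical but keeps every exponent nonpositive.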