The Pinsker inequality lower bounds the Kullback--Leibler divergence $D_{\textrm{KL}}$ in terms of the total variation distance and provides a canonical way to convert $D_{\textrm{KL}}$ control into $\lVert \cdot \rVert_1$ control. Motivated by applications to probabilistic prediction with Tsallis losses and to online learning, we establish a generalized Pinsker inequality for the Bregman divergences $D_α$ generated by the negative $α$-Tsallis entropies, also known as $β$-divergences. Specifically, for all $p$, $q$ in the relative interior of the probability simplex $Δ^K$, we prove the sharp bound \[ D_α(p\Vert q) \ge \frac{C_{α,K}}{2}\cdot \|p-q\|_1^2, \] and we determine the optimal constant $C_{α,K}$ explicitly for every choice of $(α,K)$.
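For orientation, one common normalization of this family (an assumption for illustration; the paper's convention may differ by a constant factor) takes the generator to be the negative $α$-Tsallis entropy, whose Bregman divergence on $Δ^K$ works out to \[ \phi_α(p) = \frac{1}{α-1}\Bigl(\sum_{i=1}^K p_i^α - 1\Bigr), \qquad D_α(p \Vert q) = \frac{1}{α-1}\sum_{i=1}^K \Bigl(p_i^α + (α-1)\, q_i^α - α\, p_i\, q_i^{α-1}\Bigr). \] As $α \to 1$, $\phi_α$ tends to the negative Shannon entropy and $D_α$ to $D_{\textrm{KL}}$, for which the classical Pinsker inequality yields the constant $1$ in the bound above.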