We study a quantity called the discrete layered entropy, which approximates the Shannon entropy to within a logarithmic gap. Unlike the Shannon entropy, the discrete layered entropy is piecewise linear; moreover, it approximates the expected length of the optimal one-to-one (non-prefix) code and satisfies an elegant conditioning property. These properties make it useful for approximating the Shannon entropy in linear programming and maximum entropy problems, for studying the optimal length of conditional encodings, and for bounding the entropy of monotonic mixture distributions. In particular, it yields the bound $I(X;Y)+\log(I(X;Y)+3.4)+1$ for the strong functional representation lemma, which is optimal to within $2.8$ bits and significantly improves upon the best known bound.
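For concreteness, here is an illustrative evaluation of the bound (not taken from the abstract itself, and assuming base-$2$ logarithms as is standard in this setting): at $I(X;Y)=0$ the bound specializes to
\[
0+\log_2(0+3.4)+1=\log_2 3.4+1\approx 2.77\ \text{bits},
\]
a value consistent with the stated optimality gap of $2.8$ bits.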