Relative entropy coding (REC) algorithms encode a random sample following a target distribution $Q$, using a coding distribution $P$ shared between the sender and receiver. Unfortunately, general REC algorithms suffer from prohibitive encoding times, at least on the order of $2^{D_{\text{KL}}[Q||P]}$, and faster algorithms are limited to very specific settings. This work addresses this issue by introducing a REC scheme that uses space partitioning to reduce runtime in practical scenarios. We provide theoretical analyses of our method and demonstrate its effectiveness with both toy examples and practical applications. Notably, our method successfully handles REC tasks with $D_{\text{KL}}[Q||P]$ about three times greater than what previous methods can manage, and reduces the bitrate by approximately 5–15% relative to previous methods in VAE-based lossless compression on MNIST and INR-based lossy compression on CIFAR-10, significantly improving the practicality of REC for neural compression.
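To make the runtime issue concrete, the following is a minimal sketch of one classical REC construction (rejection-based coding with a shared PRNG seed), not the method proposed here. The sender and receiver both know $P$; only the sender knows $Q$. The sender draws candidates from $P$ using the shared seed and transmits the index of the first candidate accepted by a rejection test against $Q$; the receiver regenerates the same candidate stream and reads off that index. The expected index, and hence the encoding time, grows like $2^{D_{\text{KL}}[Q||P]}$. The Gaussian parameters and the bound `M` below are illustrative choices, not values from the paper.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def rec_encode(seed, q_mu, q_sigma, p_mu, p_sigma, M):
    """Sender: draw candidates from P with the shared seed; return the index
    of the first candidate accepted against Q, plus the sample itself.
    M must upper-bound q(x) / p(x) for the rejection test to be valid."""
    rng = random.Random(seed)
    i = 0
    while True:
        i += 1
        x = rng.normalvariate(p_mu, p_sigma)
        u = rng.random()
        # Standard rejection test: accept x with probability q(x) / (M * p(x)).
        if u * M * normal_pdf(x, p_mu, p_sigma) <= normal_pdf(x, q_mu, q_sigma):
            return i, x

def rec_decode(seed, index, p_mu, p_sigma):
    """Receiver: replay the sender's candidate stream from the shared seed
    and return the candidate at the transmitted index."""
    rng = random.Random(seed)
    x = None
    for _ in range(index):
        x = rng.normalvariate(p_mu, p_sigma)
        rng.random()  # consume the sender's acceptance draw to stay in sync
    return x

# Example: target Q = N(1, 0.5^2), coding distribution P = N(0, 1).
# For these parameters max_x q(x)/p(x) = 2 * e^(2/3) < 4, so M = 4 is valid.
index, x_sent = rec_encode(seed=42, q_mu=1.0, q_sigma=0.5,
                           p_mu=0.0, p_sigma=1.0, M=4.0)
x_received = rec_decode(seed=42, index=index, p_mu=0.0, p_sigma=1.0)
# The receiver recovers exactly the sender's sample from (seed, index) alone.
```

Only the integer index crosses the channel, so the bitrate is roughly $\log_2$ of the expected index, close to $D_{\text{KL}}[Q||P]$ bits; the catch is that the sender must actually generate all those candidates, which is the exponential-time bottleneck the abstract describes.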