Due to the fundamental connection between next-symbol prediction and compression, modern predictive models, such as large language models (LLMs), can be combined with entropy coding to achieve compression rates that surpass those of standard compression algorithms. However, this approach relies on the assumption that the predictive model produces identical output distributions at both the encoder and decoder, since even small mismatches can cause decoding to fail. In practice, complex predictive models, particularly those based on neural networks, often violate this assumption, a phenomenon referred to as non-determinism. In this work, we propose a new compression algorithm based on next-token prediction that is robust to arbitrarily large, but structured, prediction mismatches. We prove the correctness of the proposed scheme under a formal mismatch certification, characterize its theoretical performance, and validate it experimentally on real datasets. Our results demonstrate reliable operation within the certified mismatch regime while achieving compression ratios that exceed those of commonly used compression methods.
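To make the coupling between prediction and coding concrete, the following is a minimal sketch, not the paper's algorithm, of predictive compression via exact arithmetic coding. A toy deterministic `model` stands in for an LLM, and the names `encode`/`decode` are illustrative. The final assertion holds only because the decoder queries exactly the same next-symbol distributions as the encoder, which is precisely the determinism assumption the abstract identifies as fragile for neural models.

```python
# Minimal sketch: predictive compression with exact arithmetic coding.
# Assumes a toy bigram-style model in place of an LLM; exact Fractions
# avoid floating-point issues so the only possible source of decoding
# failure is a model mismatch between encoder and decoder.
from fractions import Fraction

ALPHABET = "ab"

def model(context):
    """Toy stand-in for an LLM: P(next symbol | context) as exact fractions."""
    if context.endswith("a"):
        return {"a": Fraction(3, 4), "b": Fraction(1, 4)}
    return {"a": Fraction(1, 4), "b": Fraction(3, 4)}

def encode(msg):
    low, width = Fraction(0), Fraction(1)
    ctx = ""
    for sym in msg:
        probs = model(ctx)
        cum = Fraction(0)
        for s in ALPHABET:  # narrow [low, low+width) to the sub-interval of sym
            if s == sym:
                low += width * cum
                width *= probs[s]
                break
            cum += probs[s]
        ctx += sym
    # Any number inside the final interval identifies the message.
    return low + width / 2

def decode(code, n):
    low, width = Fraction(0), Fraction(1)
    ctx, out = "", []
    for _ in range(n):
        probs = model(ctx)  # must match the encoder's distribution exactly
        cum = Fraction(0)
        for s in ALPHABET:  # find the sub-interval containing the code point
            if low + width * (cum + probs[s]) > code:
                out.append(s)
                low += width * cum
                width *= probs[s]
                break
            cum += probs[s]
        ctx += out[-1]
    return "".join(out)

msg = "aababb"
assert decode(encode(msg), len(msg)) == msg
```

The code length is roughly the negative log of the final interval width, so a better predictor yields a shorter code; conversely, if the decoder's `model` returns even slightly different probabilities, the interval boundaries shift and every symbol after the first mismatch can be decoded incorrectly.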