Assessing whether two datasets are distributionally consistent has become a central task in modern scientific analysis, particularly as generative artificial intelligence is increasingly used to produce synthetic datasets whose fidelity must be rigorously validated against the original data on which they are trained. This task is made more challenging by the continued growth in data volume and problem dimensionality. In this work, we propose the use of arithmetic coding to provide a lossless and invertible compression of datasets under a physics-informed probabilistic representation. Datasets that share the same underlying physical correlations admit comparable optimal descriptions, whereas discrepancies in those correlations (arising from miscalibration, mismodeling, or bias) manifest as an irreducible excess in code length. This excess code length defines an operational fidelity metric, quantified directly in bits through differences in achievable compression length relative to a physics-informed reference distribution. We demonstrate that this metric is global, interpretable, additive across components, and asymptotically optimal in the Shannon sense. Moreover, we show that differences in code length correspond to differences in expected negative log-likelihood evaluated under the same physics-informed reference model. As a byproduct, we also demonstrate that our compression approach achieves a higher compression ratio than traditional general-purpose algorithms such as gzip. Our results establish lossless, physics-aware compression based on arithmetic coding not as an end in itself, but as a measurement instrument for testing the fidelity between datasets.
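To make the stated correspondence between excess code length and expected negative log-likelihood concrete, the following minimal sketch (not taken from the paper; the Gaussian reference, the toy data, and all function names are illustrative assumptions) estimates the Shannon-ideal code length of two datasets under a shared reference distribution and reports the excess in bits per sample.

```python
import numpy as np
from scipy.stats import norm

def ideal_codelength_bits(samples, ref_logpdf):
    """Shannon-ideal code length (in bits) of `samples` under a reference model.

    An arithmetic coder driven by the reference model approaches -log2 p(x)
    bits per symbol, so total code length ~ negative log2-likelihood. For
    continuous data, values would in practice be quantized to some precision
    delta before coding; the resulting -log2(delta) offset per sample is the
    same for both datasets and cancels in the code-length difference.
    """
    return -np.sum(ref_logpdf(samples)) / np.log(2)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=10_000)       # stand-in for the original data
synthetic = rng.normal(0.1, 1.1, size=10_000)  # slightly miscalibrated synthetic data

# Hypothetical physics-informed reference distribution: a standard Gaussian.
ref_logpdf = lambda x: norm.logpdf(x, loc=0.0, scale=1.0)

L_real = ideal_codelength_bits(real, ref_logpdf)
L_synth = ideal_codelength_bits(synthetic, ref_logpdf)

# Excess code length per sample: values above zero flag a distributional
# discrepancy between the synthetic data and the reference-consistent data.
excess = (L_synth - L_real) / len(real)
print(f"excess code length: {excess:.4f} bits/sample")
```

In this toy setting the excess is positive because the synthetic data were drawn from a shifted, wider Gaussian than the reference; a faithful synthetic dataset would drive the per-sample excess toward zero.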