Just Noticeable Distortion (JND)-guided pre-filtering is a promising technique for improving the perceptual compression efficiency of image coding. However, existing methods are often computationally expensive, and the field lacks standardized benchmarks for fair comparison. To address these challenges, this paper makes a twofold contribution. First, we develop and open-source FJNDF-Pytorch, a unified benchmark for frequency-domain JND-guided pre-filters. Second, leveraging this platform, we propose a complete learning framework for a novel, lightweight Convolutional Neural Network (CNN). Experimental results demonstrate that our method achieves state-of-the-art compression efficiency, consistently outperforming competitors across multiple datasets and encoders. In terms of computational cost, our model is exceptionally lightweight, requiring only 7.15 GFLOPs to process a 1080p image, merely 14.1% of the cost of a recent lightweight network. Our work thus offers a robust, state-of-the-art solution that excels in both performance and efficiency, supported by a reproducible research platform. The open-source implementation is available at https://github.com/viplab-fudan/FJNDF-Pytorch.