No-Reference Image Quality Assessment (NR-IQA) aims to estimate image quality in accordance with subjective human perception. However, most existing NR-IQA methods focus on increasingly complex networks or components to improve the final performance. Such practice imposes great limitations and complexity on IQA methods, especially when they are applied to high-resolution (HR) images in the real world. In fact, most images, especially HR data, exhibit high spatial redundancy. To exploit this characteristic and alleviate the issue above, we propose a new framework for Image Quality Assessment with compressive Sampling (dubbed S-IQA), which consists of three components: (1) a Flexible Sampling Module (FSM) that samples the image to obtain measurements at an arbitrary ratio; (2) a Vision Transformer with an Adaptive Embedding Module (AEM) that maps measurements to a uniform size and extracts deep features; and (3) a Dual Branch (DB) that allocates a weight to every patch and predicts the final quality score. Experiments show that our proposed S-IQA achieves state-of-the-art results on various datasets with less data usage.
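The compressive-sampling idea behind the FSM can be illustrated with a generic block-based sketch. Note this is a minimal illustration under standard compressed-sensing assumptions (a random Gaussian measurement matrix on flattened pixel blocks), not the paper's learned sampling operator; the block size and ratio below are arbitrary choices for demonstration.

```python
import numpy as np

def compressive_sample(image_block, ratio, rng=None):
    """Compressively sample a flattened image block.

    A block of N pixels is projected onto M = round(ratio * N) random
    directions, yielding M measurements instead of N pixel values, so
    the sampling ratio can be chosen arbitrarily in (0, 1].
    """
    rng = np.random.default_rng(rng)
    x = image_block.reshape(-1).astype(np.float64)   # flatten: N pixels
    n = x.size
    m = max(1, int(round(ratio * n)))                # measurements at the chosen ratio
    phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
    return phi @ x                                   # y = Phi @ x, length M

# Example: a 32x32 block sampled at a 10% ratio yields 102 measurements.
block = np.zeros((32, 32))
y = compressive_sample(block, ratio=0.10, rng=0)
print(y.shape)  # (102,)
```

Because the measurement count scales linearly with the chosen ratio, downstream feature extraction operates on far less data than the full pixel grid, which is what makes this attractive for HR inputs.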