In this work, we present an arbitrary-scale super-resolution (SR) method for enhancing the resolution of scientific data, which poses challenges such as preserving continuity, capturing multi-scale physics, and recovering high-frequency signals. Grounded in operator learning, the proposed method is resolution-invariant. At its core is a hierarchical neural operator that leverages a Galerkin-type self-attention mechanism, enabling efficient learning of mappings between function spaces. Sinc filters carry information across the levels of the hierarchy, ensuring representation equivalence in the proposed neural operator. Additionally, we introduce a learnable loss prior derived from a spectral resizing of the input data. This prior is model-agnostic and dynamically adjusts the weighting of pixel contributions, balancing gradients effectively across the model. Extensive experiments on diverse datasets from multiple domains demonstrate consistent improvements over strong baselines, including various state-of-the-art SR methods.
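As a rough illustration of the Galerkin-type self-attention mentioned above, the following minimal NumPy sketch shows the softmax-free form in which the key-value product is contracted first, making the cost linear in the number of discretization points rather than quadratic. The shapes, the stand-in per-feature normalization, and the function name `galerkin_attention` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def galerkin_attention(Q, K, V):
    """Softmax-free (Galerkin-type) attention, linear in sequence length n.

    Instead of softmax(Q K^T) V, which costs O(n^2 d), compute
    Q @ (K^T V) / n, which costs O(n d^2). Normalization of K and V
    replaces the softmax; a simple per-feature standardization stands
    in for layer normalization here (an assumption for this sketch).
    """
    def norm(X):
        mu = X.mean(axis=-1, keepdims=True)
        sd = X.std(axis=-1, keepdims=True) + 1e-6
        return (X - mu) / sd

    n = Q.shape[0]
    return Q @ (norm(K).T @ norm(V)) / n  # shape (n, d)

rng = np.random.default_rng(0)
n, d = 64, 8  # hypothetical number of grid points / channel width
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = galerkin_attention(Q, K, V)
print(out.shape)  # (64, 8)
```

Because `K.T @ V` is a fixed `d x d` matrix independent of `n`, the same learned operator can be applied to inputs sampled at any resolution, which is what makes this attention variant a natural fit for resolution-invariant operator learning.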
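The "spectral resizing" from which the loss prior is derived can be sketched as Fourier-domain zero-padding: transform the signal, extend (or truncate) its spectrum to the target length, and transform back. This is a minimal 1-D sketch under the assumption of a periodic, band-limited signal; the function name `spectral_resize` and the amplitude rescaling convention are illustrative, not taken from the paper.

```python
import numpy as np

def spectral_resize(x, new_n):
    """Resize a 1-D periodic signal by zero-padding/truncating its spectrum.

    Upsampling adds zero high-frequency bins (no new content is invented);
    downsampling discards them. The final scale factor compensates for
    NumPy's length-dependent inverse-FFT normalization.
    """
    n = x.shape[0]
    X = np.fft.rfft(x)                     # one-sided spectrum, n//2 + 1 bins
    m = new_n // 2 + 1
    Y = np.zeros(m, dtype=complex)
    k = min(X.shape[0], m)
    Y[:k] = X[:k]                          # keep the shared low-frequency bins
    return np.fft.irfft(Y, n=new_n) * (new_n / n)

x = np.cos(2 * np.pi * np.arange(32) / 32)   # one period sampled at 32 points
y = spectral_resize(x, 128)                  # 4x upsampling
print(np.allclose(y, np.cos(2 * np.pi * np.arange(128) / 128)))  # True
```

For a pure band-limited tone the resized signal matches the analytic function exactly, which is the sense in which spectral resizing provides a natural reference signal at the target resolution.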