Audio super-resolution aims to enhance low-resolution signals by generating high-frequency content. In this work, we modify the architecture of AERO (a state-of-the-art system for this task) for music super-resolution. Specifically, we replace its original Attention and LSTM layers with Mamba, a State Space Model (SSM), across all network layers. Mamba can effectively substitute for both modules, as it offers a mechanism similar to Attention while also functioning as a recurrent network. With the proposed AEROMamba, training requires 2-4x less GPU memory, since Mamba exploits the convolutional formulation and leverages the GPU memory hierarchy. Additionally, during inference, Mamba operates in constant memory due to its recurrence, avoiding the memory growth associated with Attention. This yields a 14x inference speedup using 5x less GPU memory. Subjective listening tests (0 to 100 scale) show that the proposed model surpasses AERO. On the MUSDB dataset, degraded signals scored 38.22, while AERO and AEROMamba scored 60.03 and 66.74, respectively. On the PianoEval dataset, scores were 72.92 for degraded signals, 76.89 for AERO, and 84.41 for AEROMamba.
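The constant-memory inference claim can be illustrated with a minimal sketch of the linear recurrence underlying SSMs (omitting Mamba's input-dependent selectivity and its convolutional training mode). The function below is a hypothetical toy, not the paper's implementation: a diagonal SSM whose hidden state has fixed size, so per-step memory stays O(state_dim) regardless of sequence length, unlike an Attention key/value cache that grows with every generated step.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Diagonal linear SSM over a 1-D input sequence (toy sketch).

        h[t] = A * h[t-1] + B * x[t]   (elementwise; A, B are diagonals)
        y[t] = C . h[t]

    The recurrent state h has a fixed size, so inference memory is
    constant in the sequence length, in contrast to Attention, whose
    key/value cache grows linearly with t.
    """
    h = np.zeros_like(A)          # constant-size recurrent state
    ys = []
    for xt in x:
        h = A * h + B * xt        # state update: no history is stored
        ys.append(C @ h)          # readout
    return np.array(ys)
```

During training, this same recurrence can be unrolled as a convolution over the whole sequence, which is the parallel formulation the abstract credits for Mamba's reduced training memory.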