Recently, the Foundation Error Correction Code Transformer (FECCT) has emerged as a promising universal channel decoder, achieving competitive decoding performance across diverse code families with a single shared model backbone, optionally followed by code-specific retraining. Despite this flexibility, the high computational complexity and large parameter footprint of transformer-based decoders remain substantial obstacles to practical deployment. To address these challenges, we investigate structured pruning for FECCT and propose Spectral-Aligned Pruning (SAP), a structure-aware framework that enables reuse of structured pruning masks across codes by leveraging the spectrum of each code's corresponding bipartite graph. After pruning, SAP performs per-code recovery via parameter-efficient low-rank adaptation (LoRA), retaining a shared pruned backbone while storing only small code-specific adapter parameters. Experiments across diverse codes show that SAP achieves decoding performance comparable to dedicated per-code pruning, while substantially reducing computational cost and model memory footprint through kernel-level structured pruning.
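The shared-backbone-plus-adapter scheme described above can be illustrated with a minimal sketch. This is not the paper's implementation: the mask here is an arbitrary row mask standing in for a spectrum-derived, cross-code-reusable mask, the layer sizes and LoRA rank are made up, and all names (`W_pruned`, `forward`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 16, 2

# Dense weight of one layer in the shared transformer backbone.
W = rng.standard_normal((d_out, d_in))

# Kernel-level structured pruning: zero out entire output rows (kernels).
# In SAP the mask would be derived from the code's bipartite-graph spectrum
# and reused across codes; here we simply keep every other row.
keep = np.arange(d_out) % 2 == 0            # hypothetical reusable mask
W_pruned = W * keep[:, None]

# Per-code recovery via LoRA: only the small factors A and B are stored
# per code, while the pruned backbone W_pruned stays shared.
A = rng.standard_normal((d_out, rank)) * 0.01
B = rng.standard_normal((rank, d_in)) * 0.01

def forward(x):
    # Shared pruned backbone plus a code-specific low-rank update.
    return (W_pruned + A @ B) @ x

y = forward(rng.standard_normal(d_in))
print(y.shape)  # (16,)
```

The per-code storage cost is only `rank * (d_in + d_out)` adapter parameters, versus `d_in * d_out` for a dedicated per-code backbone, which is what makes the shared pruned backbone attractive.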