Recently, the Foundation Error Correction Code Transformer (FECCT) has emerged as a promising universal channel decoder, achieving competitive decoding performance across diverse code families with a single shared model backbone, optionally followed by code-specific retraining. Despite this flexibility, the high computational complexity and large parameter footprint of transformer-based decoders present substantial obstacles to practical deployment. To address these challenges, we investigate structured pruning for FECCT and propose Spectral-Aligned Pruning (SAP), a structure-aware framework that enables structured pruning masks to be reused across codes by leveraging the spectrum of the corresponding bipartite graph. After pruning, SAP performs per-code recovery via parameter-efficient low-rank adaptation (LoRA), retaining a shared pruned backbone while storing only small code-specific adapter parameters. Experiments across diverse codes show that SAP achieves decoding performance comparable to dedicated per-code pruning, while substantially reducing computational cost and model memory footprint through kernel-level structured pruning.
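To make the storage argument concrete, the following is a minimal sketch (not the paper's implementation) of the recovery stage described above: a single pruned backbone weight is shared and frozen, and each code keeps only a rank-r LoRA adapter pair. All shapes, the rank, and the example code names are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of SAP's per-code recovery stage. The shared
# backbone weight (post structured pruning) is frozen; each code stores
# only a small low-rank adapter (A, B), as in standard LoRA.
d_in, d_out, rank = 64, 64, 4  # illustrative sizes, not from the paper
rng = np.random.default_rng(0)

# Shared pruned backbone weight, frozen after pruning.
W_pruned = rng.standard_normal((d_out, d_in))

def lora_forward(x, W, A, B):
    """y = x (W + B A)^T : frozen backbone plus low-rank per-code update."""
    return x @ W.T + (x @ A.T) @ B.T

# Per-code adapters: only A (rank x d_in) and B (d_out x rank) are stored.
# A is zero-initialized, so each adapter starts as the identity update.
adapters = {
    code: (np.zeros((rank, d_in)), rng.standard_normal((d_out, rank)) * 0.01)
    for code in ["BCH(63,51)", "Polar(64,32)"]  # example code labels
}

x = rng.standard_normal((1, d_in))
for code, (A, B) in adapters.items():
    y = lora_forward(x, W_pruned, A, B)
    extra, full = A.size + B.size, W_pruned.size
    print(f"{code}: adapter params {extra} vs backbone {full} "
          f"({100 * extra / full:.1f}%)")
```

With these toy sizes each adapter adds r(d_in + d_out) = 512 parameters against a 4096-parameter backbone weight, i.e. 12.5% per code, which illustrates why storing one adapter per code is far cheaper than storing one pruned model per code.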