Compact pretrained bidirectional encoders remain the backbone of industrial NLP under tight compute and memory budgets. Their effectiveness stems from self-attention's ability to deliver high-quality bidirectional contextualization with sequence-level parallelism, as popularized by BERT-style architectures. Recently, Avey was introduced as an autoregressive, attention-free alternative that naturally admits an encoder-only adaptation. In this paper, we reformulate Avey for the encoder-only paradigm and propose several innovations to its architecture, including decoupled static and dynamic parameterizations, stability-oriented normalization, and neural compression. Results show that this reformulated architecture compares favorably to four widely used Transformer-based encoders, consistently outperforming them on standard token-classification and information-retrieval benchmarks while scaling more efficiently to long contexts.