Privacy-preserving computation enables language model inference directly on encrypted data, yet it suffers from prohibitive latency and communication overheads, primarily due to nonlinear functions. Removing nonlinearities, however, can trigger one of two failure modes that limit how many nonlinearities can be removed: entropy collapse in deeper layers, which destabilizes training, and entropic overload in early layers, which leaves attention heads under-utilized. To address these challenges, we introduce AERO, an entropy-guided framework that strategically eliminates costly nonlinear operations from transformer architectures. AERO performs adaptive recalibration through a head-wise entropy regularizer with learnable per-head strengths, enabling each head to adjust its entropy level while extreme entropies are penalized and functional diversity is fostered through a tolerance margin. Experiments show that AERO saves 3.4$\times$ communication and 1.4$\times$ latency without any performance penalty.
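To make the regularizer concrete, the sketch below shows one plausible way to realize a head-wise entropy penalty with learnable per-head strengths and a tolerance margin. It is a minimal illustration, not the paper's exact loss: the class name `HeadwiseEntropyRegularizer`, the hinge-style penalty outside the margin, the choice of `target_entropy`, and the softplus used to keep strengths positive are all assumptions made here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HeadwiseEntropyRegularizer(nn.Module):
    """Sketch of a head-wise entropy regularizer (illustrative, not the exact
    AERO formulation): each attention head has a learnable strength, and head
    entropies outside a tolerance margin around a reference value are penalized."""

    def __init__(self, num_heads: int, target_entropy: float, margin: float = 0.1):
        super().__init__()
        # Learnable per-head regularization strengths (kept positive via softplus).
        self.raw_strength = nn.Parameter(torch.zeros(num_heads))
        self.target_entropy = target_entropy  # assumed reference, e.g. a fraction of log(seq_len)
        self.margin = margin                  # tolerance band in which no penalty is applied

    def forward(self, attn_probs: torch.Tensor) -> torch.Tensor:
        # attn_probs: (batch, num_heads, query_len, key_len), rows sum to 1.
        eps = 1e-9
        # Shannon entropy of each attention row, then averaged per head.
        entropy = -(attn_probs * (attn_probs + eps).log()).sum(dim=-1)  # (B, H, Q)
        per_head_entropy = entropy.mean(dim=(0, 2))                     # (H,)
        # Penalize only deviations that exceed the tolerance margin (hinge-style),
        # so both entropy collapse (too low) and overload (too high) are discouraged.
        deviation = (per_head_entropy - self.target_entropy).abs()
        excess = F.relu(deviation - self.margin)
        strength = F.softplus(self.raw_strength)
        return (strength * excess).sum()
```

In use, the returned penalty would simply be added to the task loss (e.g. `loss = ce_loss + reg(attn_probs)`), letting gradient descent shape both the attention distributions and the per-head strengths.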