State-space models (SSMs) have recently emerged as efficient alternatives to computationally intensive architectures such as Transformers for sequence modeling. However, their training typically relies on static loss functions, which may be suboptimal at different stages of learning. In this work, we introduce a hybrid model that integrates the Hyena architecture with a Dynamic Loss Network (DLN) under a Learning-to-Teach (L2T) paradigm, referred to as L2T-DLN. In this framework, the Hyena model serves as a student whose loss function is adapted online, while a teacher model, equipped with a memory of the student's past performance, guides the DLN to dynamically trade off the primary cross-entropy objective against a regularization term. We evaluate the proposed L2T-Hyena model on the Penn Treebank (PTB) and WikiText-103 language modeling benchmarks and compare it against both a vanilla Hyena SSM and a Transformer baseline. On PTB, our model achieves a validation perplexity of 102.6, a substantial improvement over the 110.5 obtained by the vanilla Hyena trained with a static loss function and the 121.28 achieved by the Transformer baseline. Similar gains are observed on WikiText-103, where L2T-Hyena reaches a validation perplexity of 68.3, outperforming both the vanilla Hyena (73.7) and the Transformer (89.8). These results indicate that coupling SSMs with adaptive loss functions can significantly enhance both the quality and efficiency of deep learning models for sequential data, and that the approach holds strong promise for applications in natural language processing, time-series analysis, and biological signal processing.
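The loss blending described above can be sketched minimally as follows. This is a hedged illustration, not the paper's implementation: the function names, the scalar mixing weight `alpha`, and the choice of an L2 regularizer are assumptions for exposition. In the actual framework the trade-off is produced online by the learned DLN under the teacher's guidance, not by a fixed scalar.

```python
import math

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target class under a predicted distribution."""
    return -math.log(probs[target_idx])

def dynamic_loss(probs, target_idx, params, alpha):
    """Blend the primary CE objective with an L2 regularization term.

    `alpha` in [0, 1] stands in for the weight that the (learned) Dynamic
    Loss Network would emit at this training step -- a hypothetical scalar
    here, a network output in the paper's framework.
    """
    ce = cross_entropy(probs, target_idx)          # primary objective
    reg = sum(p * p for p in params)               # L2 penalty on parameters
    return alpha * ce + (1.0 - alpha) * reg
```

As training progresses, the teacher could shift `alpha` toward the cross-entropy term once the student's parameters are well regularized, which is the kind of stage-dependent adaptation a static loss cannot express.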