State-space models (SSMs) have recently emerged as efficient alternatives to computationally intensive architectures such as Transformers for sequence modeling. However, their training typically relies on static loss functions, which may be suboptimal at different stages of learning. In this work, we introduce a hybrid model that integrates the Hyena architecture with a Dynamic Loss Network (DLN) under a Learning-to-Teach (L2T) paradigm, referred to as L2T-DLN. In this framework, the Hyena model serves as a student whose loss function is adapted online, while a teacher model, equipped with a memory of the student's past performance, guides the DLN to dynamically trade off the primary cross-entropy objective against a regularization term. We evaluate the resulting L2T-Hyena model on the Penn Treebank (PTB) and WikiText-103 language modeling benchmarks and compare it against both a vanilla Hyena SSM and a Transformer baseline. On PTB, our model achieves a validation perplexity of 102.6, a substantial improvement over the 110.5 obtained by the vanilla Hyena trained with a static loss function and the 121.28 achieved by the Transformer baseline. Similar gains are observed on WikiText-103, where L2T-Hyena reaches a validation perplexity of 68.3, outperforming vanilla Hyena (73.7) and the Transformer (89.8). These results indicate that coupling SSMs with adaptive loss functions can significantly enhance both the quality and efficiency of deep learning models for sequential data, and that this approach holds strong promise for applications in natural language processing, time-series analysis, and biological signal processing.
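The core mechanism described above, a DLN that maps a memory of the student's recent performance to mixing weights over the cross-entropy and regularization terms, can be sketched minimally as follows. This is an illustrative toy in NumPy, not the paper's implementation: the function names (`dln_weights`, `combined_loss`), the linear parameterization of the DLN, and the 4-step loss history are all hypothetical simplifications.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax, so the mixing weights are positive and sum to 1."""
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def dln_weights(history, W, b):
    """Hypothetical DLN: map a vector of the student's past losses to two
    convex mixing weights (one per loss term)."""
    return softmax(W @ history + b)  # shape (2,)

def combined_loss(ce_loss, reg_loss, history, W, b):
    """Adaptive training objective: a DLN-weighted combination of the primary
    cross-entropy loss and the regularization term."""
    w = dln_weights(history, W, b)
    return w[0] * ce_loss + w[1] * reg_loss

# Toy usage with random DLN parameters and an illustrative 4-step loss memory.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))       # hypothetical DLN parameters
b = np.zeros(2)
history = np.array([4.8, 4.5, 4.3, 4.2])  # hypothetical past student losses
loss = combined_loss(5.0, 0.1, history, W, b)
```

Because the weights come from a softmax, the combined loss is always a convex combination of the two terms; in the full framework, the teacher's feedback would update `W` and `b` over training rather than leaving them fixed.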