Modeling long-range dependencies across sequences is a longstanding goal in machine learning and has led to architectures, such as state space models, that dramatically outperform Transformers on long sequences. However, these impressive empirical gains have, by and large, been demonstrated on benchmarks (e.g., Long Range Arena) where models are randomly initialized and trained to predict a target label from an input sequence. In this work, we show that random initialization leads to gross overestimation of the differences between architectures and that pretraining with standard denoising objectives, using $\textit{only the downstream task data}$, leads to dramatic gains across multiple architectures and to very small gaps between Transformers and state space models (SSMs). In stark contrast to prior work, we find that vanilla Transformers match the performance of S4 on Long Range Arena when properly pretrained, and we improve the best reported results of SSMs on the PathX-256 task by 20 absolute points. Subsequently, we analyze the utility of previously proposed structured parameterizations for SSMs and show that they become largely redundant in the presence of the data-driven initialization obtained through pretraining. Our work shows that, when evaluating different architectures on supervised tasks, the incorporation of data-driven priors via pretraining is essential for reliable performance estimation and can be done efficiently.