Recent advances in language modeling have shown promising results when applied to time series data. In particular, fine-tuning pre-trained large language models (LLMs) for time series classification has achieved state-of-the-art (SOTA) performance on standard benchmarks. However, these LLM-based models carry a significant drawback: their large size, with trainable parameters numbering in the millions. In this paper, we propose LETS-C, an alternative approach to leveraging the success of language modeling in the time series domain. Instead of fine-tuning LLMs, we use a language embedding model to embed time series and pair the embeddings with a simple classification head composed of convolutional neural networks (CNNs) and a multilayer perceptron (MLP). We conducted extensive experiments on well-established time series classification benchmark datasets and demonstrate that LETS-C not only outperforms the current SOTA in classification accuracy but also offers a lightweight solution, using on average only 14.5% of the trainable parameters of the SOTA model. Our findings suggest that embedding time series data with language encoders, combined with a simple yet effective classification head, is a promising direction for achieving high-performance time series classification while maintaining a lightweight model architecture.
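To make the proposed pipeline concrete, the sketch below illustrates the overall shape of the approach: a fixed-size embedding of a time series (here stood in by a random vector, since the actual language embedding model is not specified in this excerpt) is passed through a small Conv1d-plus-MLP classification head. All layer sizes, the 64-dimensional embedding, and the 3-class output are hypothetical choices for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D cross-correlation. x: (in_ch, L), w: (out_ch, in_ch, k), b: (out_ch,)."""
    out_ch, in_ch, k = w.shape
    L = x.shape[1] - k + 1
    out = np.empty((out_ch, L))
    for o in range(out_ch):
        for t in range(L):
            out[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return out

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical stand-in: in LETS-C a language embedding model would map the
# time series to this fixed-size vector; here we use a random 64-d vector.
embedding = rng.standard_normal(64)

# Classification head, sketched: Conv1d over the embedding treated as a
# 1-channel sequence, global average pooling, then a small MLP.
w_conv = rng.standard_normal((8, 1, 5)) * 0.1   # 8 filters, kernel size 5
b_conv = np.zeros(8)
feat = relu(conv1d(embedding[None, :], w_conv, b_conv)).mean(axis=1)  # (8,)

w1 = rng.standard_normal((16, 8)) * 0.1  # hidden layer (hypothetical width)
b1 = np.zeros(16)
w2 = rng.standard_normal((3, 16)) * 0.1  # 3 classes, chosen for illustration
b2 = np.zeros(3)

logits = w2 @ relu(w1 @ feat + b1) + b2
pred = int(np.argmax(logits))
print(logits.shape, pred)
```

Because the head is only a few small layers operating on a precomputed embedding, its trainable parameter count stays tiny relative to fine-tuning an LLM, which is the lightweight property the abstract highlights.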