LLMs have emerged as powerful tools for interpreting multimodal data. In medicine, they hold particular promise for synthesizing large volumes of clinical information into actionable insights and digital health applications. Yet a major limitation remains their inability to handle time series. To overcome this gap, we present OpenTSLM, a family of Time Series Language Models (TSLMs) created by integrating time series as a native modality into pretrained LLMs, enabling reasoning over multiple time series of any length. We investigate two architectures for OpenTSLM. The first, OpenTSLM-SoftPrompt, models time series implicitly by concatenating learnable time series tokens with text tokens via soft prompting. Although this approach is parameter-efficient, we hypothesize that explicit time series modeling scales better and outperforms it. We thus introduce OpenTSLM-Flamingo, which integrates time series with text via cross-attention. We benchmark both variants against baselines that treat time series as text tokens or as plots, across a suite of text-time-series Chain-of-Thought (CoT) reasoning tasks. We introduce three datasets: HAR-CoT, Sleep-CoT, and ECG-QA-CoT. Across all of them, OpenTSLM models outperform the baselines, reaching 69.9 F1 in sleep staging and 65.4 F1 in human activity recognition (HAR), compared to 9.05 and 52.2 for finetuned text-only models. Notably, even 1B-parameter OpenTSLM models surpass GPT-4o (15.47 and 2.95). OpenTSLM-Flamingo matches OpenTSLM-SoftPrompt in performance and outperforms it on longer sequences, while maintaining stable memory requirements. By contrast, SoftPrompt's memory footprint grows exponentially with sequence length, requiring around 110 GB of VRAM compared to 40 GB when training on ECG-QA with LLaMA-3B. Expert reviews by clinicians find that OpenTSLM models exhibit strong reasoning capabilities on ECG-QA. To facilitate further research, we release all code, datasets, and models as open source.
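The two fusion strategies above can be contrasted in a minimal sketch. This is not the authors' implementation; all dimensions, the linear stand-in encoder, and the single-head attention are toy assumptions chosen only to show why the soft-prompt sequence grows with the time series while the cross-attention design keeps the text sequence length fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_text, n_patches, patch_len = 64, 5, 8, 16  # toy sizes (assumptions)

W_enc = rng.normal(size=(patch_len, d_model))   # stand-in time-series encoder
ts = rng.normal(size=(n_patches, patch_len))    # one time series, split into patches
text_emb = rng.normal(size=(n_text, d_model))   # text-token embeddings

# SoftPrompt-style (implicit): encode patches into "time-series tokens"
# and prepend them to the text embeddings, so the LLM processes one
# longer joint sequence -- its length grows with the time series.
ts_tokens = ts @ W_enc                                    # (n_patches, d_model)
soft_prompt_seq = np.concatenate([ts_tokens, text_emb], axis=0)
print(soft_prompt_seq.shape)                              # (13, 64)

# Flamingo-style (explicit): text queries cross-attend to the time-series
# tokens, so the text sequence length (and thus self-attention cost in
# the LLM backbone) stays fixed regardless of series length.
def cross_attention(q, kv):
    scores = q @ kv.T / np.sqrt(q.shape[1])               # scaled dot-product
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)         # row-wise softmax
    return weights @ kv

fused_text = text_emb + cross_attention(text_emb, ts_tokens)
print(fused_text.shape)                                   # (5, 64)
```

Because the joint sequence in the soft-prompt path lengthens with every additional patch, attention memory during training grows with series length, consistent with the VRAM gap reported in the abstract.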