Time series classification (TSC) spans diverse application scenarios, yet labeled data are often scarce, making task-specific training costly and inflexible. Recent reasoning-oriented large language models (LLMs) show promise in understanding temporal patterns, but purely zero-shot usage remains suboptimal. We propose FETA, a multi-agent framework for training-free TSC via exemplar-based in-context reasoning. FETA decomposes a multivariate series into channel-wise subproblems, retrieves a few structurally similar labeled examples for each channel, and leverages a reasoning LLM to compare the query against these exemplars, producing channel-level labels with self-assessed confidences; a confidence-weighted aggregator then fuses all channel decisions. This design eliminates the need for pretraining or fine-tuning, improves efficiency by pruning irrelevant channels and controlling input length, and enhances interpretability through exemplar grounding and confidence estimation. On nine challenging UEA datasets, FETA achieves strong accuracy under a fully training-free setting, surpassing multiple trained baselines. These results demonstrate that a multi-agent in-context reasoning framework can transform LLMs into competitive, plug-and-play TSC solvers without any parameter training. The code is available at https://github.com/SongyuanSui/FETATSC.
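To make the final fusion step concrete, the following is a minimal sketch of how a confidence-weighted aggregator over channel-level decisions could look. The function name `fuse_channel_decisions` and the specific weighting scheme (summing self-assessed confidences per candidate label and taking the argmax) are illustrative assumptions; the abstract only states that channel decisions are fused by a confidence-weighted aggregator.

```python
from collections import defaultdict

def fuse_channel_decisions(channel_preds):
    """Confidence-weighted fusion of channel-level decisions (illustrative sketch).

    channel_preds: list of (label, confidence) pairs, one per retained channel,
    where each label is the LLM's channel-level prediction and confidence is
    its self-assessed score in [0, 1].
    Returns the label with the highest total confidence-weighted vote.
    """
    scores = defaultdict(float)
    for label, conf in channel_preds:
        scores[label] += conf
    return max(scores, key=scores.get)

# Example: three channels vote; two agree on "walking" with high confidence.
print(fuse_channel_decisions([("walking", 0.9), ("running", 0.6), ("walking", 0.7)]))
# -> "walking"
```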