Foundation models succeed when they learn in the native structure of a modality, whether morphology-respecting tokens in language or pixels in vision. Wireless packet traces deserve the same treatment: meaning emerges from layered headers, typed fields, timing gaps, and cross-packet state machines, not flat strings. We present Plume (Protocol Language Understanding Model for Exchanges), a compact 140M-parameter foundation model for 802.11 traces that learns from structured PDML dissections. A protocol-aware tokenizer splits along the dissector field tree, emits gap tokens for timing, and normalizes identifiers, yielding 6.2x shorter sequences than BPE with higher per-token information density. Trained on a curated corpus, Plume achieves 74-97% next-packet token accuracy across five real-world failure categories and AUROC >= 0.99 for zero-shot anomaly detection. On the same prediction task, frontier LLMs (Claude Opus 4.6, GPT-5.4) score only comparably despite receiving identical protocol context, while Plume matches them with >600x fewer parameters, fits on a single GPU, and runs at effectively zero marginal cost relative to cloud API pricing, enabling on-prem, privacy-preserving root-cause analysis.
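The tokenization the abstract describes can be illustrated with a minimal sketch: walk a PDML dissection tree, emit one token per typed field, quantize the inter-frame gap into a coarse timing bucket, and normalize volatile identifiers such as MAC addresses. The field names, gap thresholds, and placeholder tokens below are illustrative assumptions, not Plume's actual vocabulary.

```python
# Hypothetical sketch of protocol-aware PDML tokenization (not Plume's
# real tokenizer): field-tree splitting, gap tokens, identifier masking.
import re
import xml.etree.ElementTree as ET

# A tiny hand-written PDML fragment standing in for a tshark dissection.
PDML = """<pdml>
  <packet>
    <proto name="wlan">
      <field name="wlan.fc.type_subtype" show="8"/>
      <field name="wlan.sa" show="aa:bb:cc:dd:ee:ff"/>
    </proto>
  </packet>
</pdml>"""

MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$")

def gap_token(delta_us: float) -> str:
    """Quantize an inter-frame gap (microseconds) into a timing bucket."""
    # Bucket boundaries are illustrative, loosely inspired by 802.11 timing.
    for label, bound in (("GAP_SIFS", 16), ("GAP_SLOT", 100), ("GAP_SHORT", 10_000)):
        if delta_us < bound:
            return label
    return "GAP_LONG"

def tokenize(pdml: str, delta_us: float) -> list[str]:
    tokens = [gap_token(delta_us)]
    for field in ET.fromstring(pdml).iter("field"):
        name, show = field.get("name"), field.get("show", "")
        # Normalize volatile identifiers so the vocabulary stays compact.
        if MAC_RE.match(show):
            show = "<MAC>"
        tokens.append(f"{name}={show}")
    return tokens

print(tokenize(PDML, delta_us=12.0))
# ['GAP_SIFS', 'wlan.fc.type_subtype=8', 'wlan.sa=<MAC>']
```

One token per dissector field (rather than per subword) is what drives the shorter-than-BPE sequences claimed above: each token carries a full typed field, and timing becomes a first-class symbol instead of a free-text number.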