Inspired by the success of Transformer-based models in natural language processing, this paper investigates their potential as foundation models for network traffic analysis. We propose a unified pre-training and fine-tuning pipeline for traffic foundation models. Through fine-tuning, we demonstrate the generalizability of traffic foundation models across various downstream tasks, including traffic classification, traffic characteristic prediction, and traffic generation. We also compare against non-foundation baselines and show that foundation-model backbones achieve improved performance. Moreover, we categorize existing models by architecture, input modality, and pre-training strategy. Our findings show that these models can effectively learn traffic representations and perform well even with limited labeled data, highlighting their potential in future intelligent network analysis systems.