This letter proposes a novel three-tier content caching architecture for Vehicular Fog Caching (VFC)-assisted platoons, where the VFC is formed by vehicles driving near the platoon. The system strategically coordinates storage across local platoon vehicles, dynamic VFC clusters, and a cloud server (CS) to minimize content retrieval latency. To efficiently manage this distributed storage, we integrate large language models (LLMs) for real-time, intelligent caching decisions. The proposed approach leverages LLMs' ability to process heterogeneous information, including user profiles, historical data, content characteristics, and dynamic system states. Through a designed prompting framework that encodes task objectives and caching constraints, the LLMs formulate caching as a decision-making task, and our hierarchical deterministic caching mapping strategy enables adaptive request prediction and precise content placement across the three tiers without frequent retraining. Simulation results demonstrate the advantages of the proposed caching scheme.
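The tiered retrieval idea described above can be illustrated with a minimal sketch. All names and the per-tier latency values below are hypothetical placeholders, not figures from the letter: a request is first checked against the local platoon cache, then the nearby VFC cluster, and only falls back to the cloud server (CS), so each cache hit closer to the platoon lowers retrieval latency.

```python
# Illustrative per-tier retrieval latencies in ms (assumed values,
# not taken from the letter's simulation setup).
LATENCY = {"platoon": 5, "vfc": 20, "cloud": 100}

def retrieve(content_id, platoon_cache, vfc_cache):
    """Look up content tier by tier: local platoon vehicles first,
    then the nearby VFC cluster, falling back to the cloud server (CS),
    which is assumed to hold the full content library."""
    if content_id in platoon_cache:
        return "platoon", LATENCY["platoon"]
    if content_id in vfc_cache:
        return "vfc", LATENCY["vfc"]
    return "cloud", LATENCY["cloud"]

# Usage: content "a" is cached locally, "b" only in the VFC, "c" nowhere.
print(retrieve("a", {"a"}, {"b"}))  # ('platoon', 5)
print(retrieve("b", {"a"}, {"b"}))  # ('vfc', 20)
print(retrieve("c", {"a"}, {"b"}))  # ('cloud', 100)
```

The caching decision itself (which content the LLM places in which tier) is what the letter's prompting framework and deterministic mapping strategy determine; the sketch only shows why placement in a closer tier reduces latency.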