Cooperative edge caching in overlapping coverage zones creates intricate coupling among Base Station (BS) decisions, making content replacement highly sensitive to network topology and temporal reuse. Because heuristics are often myopic and Deep Reinforcement Learning lacks robustness under dynamics, this paper proposes a Large Language Model (LLM)-based multi-BS orchestrator. The LLM acts as the sole autonomous engine, interacting with the environment via a validated text-to-action interface. In each time slot, the system renders environmental states -- including cache inventories and request-frequency statistics -- into prompts and parses LLM-generated decisions against strict feasibility constraints. We align the model through a two-stage paradigm: Supervised Fine-Tuning on oracle trajectories for syntax and initialization, followed by Group Relative Policy Optimization (GRPO). The latter employs an ``opportunity-aware'' reward that prioritizes multi-step cooperative gains relative to a No-Operation baseline. Evaluated on identical request traces, the orchestrator approaches exhaustive-search performance (0.610 vs.\ 0.617 in a 5-BS scenario), outperforms classical baselines (e.g., +4.1\% over Least-Frequently-Used), and demonstrates robust zero-shot transfer across varying cache capacities, library sizes, and user densities.
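The abstract's ``opportunity-aware'' reward and group-relative advantage can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the hit counts, the three-slot horizon, and the helper names (`opportunity_reward`, `group_relative_advantages`) are assumptions introduced here; only the idea of scoring each candidate action by its multi-step gain over a No-Operation rollout, then normalizing within the sampled group as in GRPO, comes from the text.

```python
# Hypothetical sketch: score each sampled replacement action by its
# multi-step cache-hit gain relative to a No-Operation (keep-cache) rollout,
# then compute GRPO-style group-relative advantages.

def opportunity_reward(action_hits, noop_hits):
    """Cumulative hit gain of an action over leaving the cache unchanged."""
    return sum(action_hits) - sum(noop_hits)

def group_relative_advantages(rewards, eps=1e-8):
    """Center and scale rewards within the sampled group (GRPO-style)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Illustrative numbers: per-slot hit counts over a 3-slot horizon for a
# No-Op rollout and three sampled candidate actions.
noop = [2, 2, 1]
candidates = [[3, 2, 2], [2, 2, 1], [4, 3, 2]]
rewards = [opportunity_reward(c, noop) for c in candidates]  # → [2, 0, 4]
advs = group_relative_advantages(rewards)
```

An action that merely matches the No-Op baseline earns zero reward and a negative group-relative advantage, so the policy is pushed toward replacements that create genuine cooperative gains rather than safe inaction.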