Cooperative edge caching in overlapping coverage zones couples Base Station (BS) decisions, making content replacement sensitive to spatial topology and temporal reuse. Conventional heuristics suffer from myopia, while Deep Reinforcement Learning relies on brittle numerical state representations and requires prohibitive retraining under topological or traffic dynamics. This paper studies a centralized, cooperative multi-BS cache-replacement controller driven by a Large Language Model (LLM) within a deterministic text-to-action loop. At each time slot, the global cache state is rendered into a prompt encapsulating each BS's inventory, deduplicated requests, and multi-scale frequency summaries. The LLM generates one decision line per BS; a strict parser and feasibility checker then either accept the joint action or fall back to an all-BS NoOp action. We align the LLM via two-stage training: Supervised Fine-Tuning on look-ahead expert trajectories to acquire action syntax and a robust initialization, followed by Group Relative Policy Optimization (GRPO) with an 'opportunity-aware' reward that uses multi-step cooperative hit-rate gains relative to a NoOp baseline as the primary signal, plus penalties for invalid outputs. We focus on reactive replacement of equal-sized files, with at most one replacement per BS per slot and insertions restricted to currently requested files. Evaluated on identical request traces and association graphs, our orchestrator approaches a single-step exhaustive-search reference (0.610 vs. 0.617 in a 5-BS scenario), surpasses classical baselines (+4.1% over least-frequently-used (LFU)), and exhibits robust zero-shot transfer across cache capacity, library size, popularity skewness, and user density. Code is available at https://github.com/gracefulning/CoopLLM-Cache.
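
To make the text-to-action loop concrete, the sketch below illustrates one plausible parse-and-validate step in Python. It assumes a hypothetical decision-line syntax ("BS <id>: REPLACE <file> WITH <file>" or "BS <id>: NOOP"), checks that each eviction is currently cached and each insertion is among that BS's current requests, and rejects the entire joint action in favor of all-BS NoOp on any violation. The line format, function name, and data structures are illustrative assumptions, not the paper's exact interface.

```python
import re

# Hypothetical decision-line format (the paper's exact action syntax may differ):
#   "BS <id>: REPLACE <evict_file> WITH <insert_file>"  or  "BS <id>: NOOP"
DECISION_RE = re.compile(r"^BS\s+(\d+):\s+(?:NOOP|REPLACE\s+(\S+)\s+WITH\s+(\S+))$")

def parse_and_validate(llm_text, caches, requests):
    """Strictly parse one decision line per BS; any violation falls back to all-BS NoOp.

    caches:   dict bs_id -> set of cached file ids
    requests: dict bs_id -> set of file ids requested at that BS this slot
    Returns:  dict bs_id -> (evict_file, insert_file) for a replacement, or None for NoOp
    """
    noop = {bs: None for bs in caches}            # safe fallback joint action
    actions = dict(noop)
    lines = [ln.strip() for ln in llm_text.splitlines() if ln.strip()]
    if len(lines) != len(caches):                 # exactly one line per BS is required
        return noop
    seen = set()
    for ln in lines:
        m = DECISION_RE.match(ln)
        if not m:                                 # malformed line -> reject joint action
            return noop
        bs = int(m.group(1))
        if bs not in caches or bs in seen:        # unknown BS or duplicate decision
            return noop
        seen.add(bs)
        evict, insert = m.group(2), m.group(3)
        if evict is None:                         # NOOP branch of the regex
            continue
        # Feasibility: evicted file must be cached; inserted file must be a current
        # request not already cached (equal-sized files, one replacement per BS per slot).
        if evict not in caches[bs] or insert not in requests[bs] or insert in caches[bs]:
            return noop
        actions[bs] = (evict, insert)
    return actions
```

The all-or-nothing fallback mirrors the deterministic loop described above: in this sketch, a single malformed or infeasible line discards the whole joint action, so the caches only ever change through decisions that pass every constraint.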