The online caching problem asks an algorithm to minimize cache misses when serving a sequence of requests with a cache of size $k$. While naive learning-augmented caching algorithms achieve ideal $1$-consistency, they lack robustness guarantees. Existing robustification methods either sacrifice $1$-consistency or introduce excessive computational overhead. In this paper, we introduce Guard, a lightweight robustification framework that improves the robustness of a broad class of learning-augmented caching algorithms to $2H_{k-1} + 2$ while preserving their $1$-consistency. Guard achieves the best-known trade-off between consistency and robustness to date, with only $O(1)$ additional overhead per request, thereby maintaining the original time complexity of the base algorithm. Extensive experiments on multiple real-world datasets and prediction models validate the practical effectiveness of Guard.
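Here, consistency and robustness are used in the standard sense of the learning-augmented algorithms literature; the following is a sketch of the usual definitions (the paper's exact formalization may differ slightly). Writing $\mathrm{cost}(\cdot)$ for the number of cache misses incurred on a request sequence and $\mathrm{OPT}$ for the optimal offline algorithm, an algorithm $\mathrm{ALG}$ is
$$\alpha\text{-consistent if } \mathrm{cost}(\mathrm{ALG}) \le \alpha \cdot \mathrm{cost}(\mathrm{OPT}) \text{ whenever the predictions are perfect,}$$
$$\beta\text{-robust if } \mathrm{cost}(\mathrm{ALG}) \le \beta \cdot \mathrm{cost}(\mathrm{OPT}) \text{ for arbitrary (even adversarial) predictions,}$$
where $H_{k-1} = \sum_{i=1}^{k-1} 1/i$ denotes the $(k-1)$-st harmonic number for cache size $k$.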