Task offloading, crucial for balancing computational loads across devices in networks such as the Internet of Things, poses significant optimization challenges, including minimizing latency and energy usage under strict communication and storage constraints. While traditional optimization methods fall short in scalability and heuristic approaches struggle to reach optimal outcomes, Reinforcement Learning (RL) offers a promising avenue by enabling the learning of optimal offloading strategies through iterative interactions. However, the efficacy of RL hinges on access to rich datasets and custom-tailored, realistic training environments. To address this, we introduce PeersimGym, an open-source, customizable simulation environment tailored for developing and optimizing task offloading strategies within computational networks. PeersimGym supports a wide range of network topologies and computational constraints and integrates a \textit{PettingZoo}-based interface for RL agent deployment in both single- and multi-agent setups. Furthermore, we demonstrate the utility of the environment through experiments with Deep Reinforcement Learning agents, showcasing the potential of RL-based approaches to significantly enhance offloading strategies in distributed computing settings. PeersimGym thus bridges the gap between theoretical RL models and their practical applications, paving the way for advancements in efficient task offloading methodologies.
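To make the abstract's mention of a \textit{PettingZoo}-based interface concrete, the sketch below illustrates the agent-iteration loop that PettingZoo-style multi-agent environments expose (reset, per-agent observe, step). The environment class and its observation fields are illustrative stand-ins, not the actual PeersimGym API; the reward here is a toy proxy for offloading cost.

```python
import random

class OffloadingEnvStub:
    """Toy stand-in for a PettingZoo-style multi-agent environment.
    Each worker node picks a target node index for its next task
    (0 = execute locally, 1 = offload to a neighbor)."""

    def __init__(self, n_agents=2, episode_len=3, seed=0):
        self.possible_agents = [f"worker_{i}" for i in range(n_agents)]
        self.episode_len = episode_len
        self.rng = random.Random(seed)

    def reset(self):
        self.agents = list(self.possible_agents)
        self.steps_done = 0
        self.agent_selection = self.agents[0]

    def observe(self, agent):
        # Toy observation: the node's current task-queue length.
        return {"queue_length": self.rng.randint(0, 5)}

    def step(self, action):
        # Toy reward: negative cost of the chosen action, standing in
        # for the latency/energy cost of offloading (0 = free local run).
        reward = -float(action)
        idx = self.agents.index(self.agent_selection)
        if idx == len(self.agents) - 1:
            self.steps_done += 1          # one full round of all agents done
        if self.steps_done >= self.episode_len:
            self.agents = []              # episode over
        else:
            self.agent_selection = self.agents[(idx + 1) % len(self.agents)]
        return reward

env = OffloadingEnvStub()
env.reset()
total = 0.0
while env.agents:                         # canonical AEC-style loop
    obs = env.observe(env.agent_selection)
    action = 0 if obs["queue_length"] < 3 else 1  # trivial threshold policy
    total += env.step(action)
```

An RL agent would replace the threshold policy with a learned one; the loop structure itself is what the PettingZoo interface standardizes.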