This paper proposes a method for predicting the resources required by an intelligent vehicle client using a three-layer vehicular computing architecture. The method leverages Q-learning to optimize resource allocation and enhance overall system performance, using reinforcement learning to provide a dynamic and adaptive strategy for resource management in a fog computing environment. The key finding of this study is that Q-learning can effectively predict an appropriate resource allocation by learning from past experience and making informed decisions. Through continuous training and updating of the Q-learning agent, the system adapts to changing conditions and makes resource allocation decisions based on real-time information. Experimental results demonstrate the effectiveness of the proposed method: the Q-learning agent predicts optimal values for memory, bandwidth, and processor allocation that minimize resource consumption while meeting the performance requirements of the fog system. The implementation shows that this method improves the average task processing time compared to the other methods evaluated in this study.
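To make the mechanism concrete, the following is a minimal, self-contained sketch of tabular Q-learning applied to a resource allocation decision. All specifics here are illustrative assumptions, not the paper's actual model: the states (discretized system load), actions (allocation levels), reward function, and hyperparameters are hypothetical simplifications chosen only to show the standard Q-learning update loop.

```python
import random

# Illustrative sketch (assumed model, not the paper's implementation).
# States: discretized system load (0 = low, 1 = medium, 2 = high).
# Actions: resource allocation level (0 = small, 1 = medium, 2 = large).
N_STATES, N_ACTIONS = 3, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(state, action):
    # Hypothetical reward: matching allocation to load is best;
    # over-allocating wastes resources, under-allocating hurts latency.
    return 1.0 - abs(state - action)

def choose_action(q, state):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    row = q[state]
    return row.index(max(row))

def train(episodes=5000, seed=0):
    random.seed(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    state = random.randrange(N_STATES)
    for _ in range(episodes):
        action = choose_action(q, state)
        r = reward(state, action)
        next_state = random.randrange(N_STATES)  # load varies over time
        # Standard Q-learning update rule:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        q[state][action] += ALPHA * (
            r + GAMMA * max(q[next_state]) - q[state][action]
        )
        state = next_state
    return q

q = train()
# The learned greedy policy: best allocation level for each load level.
policy = [row.index(max(row)) for row in q]
print(policy)
```

Under this toy reward, the learned greedy policy matches the allocation level to the load level, which mirrors the paper's idea of predicting just enough memory, bandwidth, and processor capacity to meet performance requirements without over-provisioning.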