The surge in Internet of Things (IoT) devices and data generation highlights the limitations of traditional cloud computing in meeting demands for immediacy, Quality of Service (QoS), and location-aware services. Fog computing emerges as a solution, bringing computation, storage, and networking closer to data sources. This study explores the role of Deep Reinforcement Learning (DRL) in enhancing task offloading in fog computing, aiming for operational efficiency and robust security. By reviewing current strategies and proposing future research directions, the paper demonstrates the potential of DRL to optimize resource use, reduce response times, and guard against security vulnerabilities. It recommends advancing DRL techniques for fog computing, exploring blockchain for stronger security, and developing energy-efficient models to improve the IoT ecosystem. By incorporating artificial intelligence, our results indicate potential improvements in key metrics such as task completion time, energy consumption, and the reduction of security incidents. These findings provide a concrete foundation for future research and practical applications in optimizing fog computing architectures.