Edge computing operates between the cloud and end users and strives to provide low-latency computing services to concurrent users. Because edge systems often operate in uncertain environments, redundantly dispatching jobs to multiple edge nodes can reduce latency. However, since edge systems have limited computing and storage resources, directing more resources to some computing jobs will either block the execution of others or offload their execution to the cloud, thus increasing latency. This paper uses the average system computing time and the blocking probability to evaluate edge system performance and analyzes the optimal resource allocation accordingly. We also propose algorithms that optimize the blocking probability and the average system time. Simulation results show that both algorithms significantly outperform the benchmark across different service time distributions, and they illustrate how the optimal replication factor changes as system parameters vary.