The allocation of computing tasks for networked distributed services raises a question for service providers: is centralized allocation management worth its cost? Existing analytical models were conceived for users accessing computing resources with practically indistinguishable delays (hence irrelevant to the allocation decision), which is typical of services located in the same distant data center. However, with the rise of the edge-cloud continuum, simply analyzing the sojourn time that computing tasks observe at the server misses the impact of the diverse latency values imposed by server locations. We therefore study the optimization of computing task allocation with a new model that accounts for both the network distance to servers and the sojourn time in them. We derive exact algorithms to optimize the system and show, through numerical analysis and real experiments, that differences in server location across the edge-cloud continuum cannot be neglected. By means of algorithmic game theory, we study the price of anarchy of a distributed implementation of the computing task allocation problem and unveil important practical properties, such as the fact that the price of anarchy tends to be small, except when the system is overloaded, and that its maximum can be computed with low complexity.