Cloud computing infrastructures increasingly rely on geographically distributed data centers to meet the growing demand for low-latency, highly available, and cost-efficient service delivery. In this context, load balancing plays a critical role in optimizing resource utilization while maintaining acceptable quality of service (QoS) under dynamic and heterogeneous workloads. This study presents a comprehensive performance and cost evaluation of three widely used load balancing strategies, namely Round Robin, Equally Spread Current Execution Load, and Throttled, within a multi-data-center cloud environment using the Cloud Analyst simulation framework. Multiple deployment scenarios are examined by varying data center locations, user base distribution, network latency, and workload intensity. Key performance metrics, including overall response time, data center processing time, request handling behavior, and operational costs such as virtual machine and data transfer costs, are analyzed across the evaluated strategies and scenarios. The results indicate that while the Round Robin strategy achieves lower internal processing times, the Equally Spread and Throttled strategies provide improved workload stability and reduced peak response times under high-demand conditions. Furthermore, distributing resources across multiple data centers significantly reduces user-perceived latency and enhances system scalability, albeit with associated cost trade-offs. The findings demonstrate that no single load balancing strategy is universally optimal; instead, performance and cost efficiency depend on workload characteristics, geographic distribution, and system objectives. This work offers practical insights for cloud service providers and system designers, emphasizing the importance of intelligent resource distribution and adaptive load balancing policies for sustainable and high-performance cloud infrastructures.
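The three policies compared in this study can be sketched as simple VM-selection rules. The following is a minimal Python sketch under assumed interfaces (each policy receives a map of VM id to its current number of active requests); Cloud Analyst's actual implementations are Java classes with additional allocation bookkeeping, so the class and method names here are illustrative only:

```python
import itertools


class RoundRobin:
    """Cycle through VMs in a fixed order, ignoring current load."""

    def __init__(self, vm_ids):
        self._cycle = itertools.cycle(vm_ids)

    def pick(self, active):
        # Next VM in rotation, regardless of how busy it is.
        return next(self._cycle)


class EquallySpread:
    """Assign each request to the VM with the fewest active requests."""

    def pick(self, active):
        # Spreads the current execution load as evenly as possible.
        return min(active, key=active.get)


class Throttled:
    """Assign a request only to a VM below its throttling threshold."""

    def __init__(self, threshold=1):
        self.threshold = threshold  # max concurrent requests per VM

    def pick(self, active):
        for vm, count in active.items():
            if count < self.threshold:
                return vm
        return None  # all VMs saturated: the request is queued
```

The sketch makes the trade-off reported in the results visible: Round Robin never inspects load (cheap per-request decision, hence low internal processing time), while Equally Spread and Throttled consult per-VM state, which costs more per dispatch but avoids piling requests onto already-busy VMs under high demand.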