The ongoing convergence of HPC and cloud computing presents a fundamental challenge: HPC applications, designed for static, homogeneous supercomputers, are ill-suited to the dynamic, heterogeneous, and volatile nature of the cloud. Traditional parallel programming models such as MPI struggle to exploit key cloud advantages, such as resource elasticity and low-cost spot instances, while also failing to address challenges such as performance variability and processor heterogeneity. This paper demonstrates how the asynchronous, message-driven paradigm of the Charm++ parallel runtime system can bridge this gap. We present a set of tools and strategies that enable HPC applications to run efficiently and resiliently on dynamic cloud infrastructure across both CPU and GPU resources. Our work makes two key contributions. First, we show that rate-aware load balancing in Charm++ improves the performance of applications running on heterogeneous CPU and GPU instances in the cloud. We further show how core Charm++ principles mitigate the performance degradation caused by common cloud challenges such as network contention and processor performance variability, which are exacerbated by the tightly coupled, globally synchronized nature of many science and engineering applications. Second, we extend an existing resource management framework to support GPU and CPU spot instances with minimal interruption overhead. Together, these contributions provide a robust framework for adapting HPC applications to achieve efficient, resilient, and cost-effective performance in the cloud.