On High-Performance Computing (HPC) systems, several hyperparameter configurations can be evaluated in parallel to speed up the Hyperparameter Optimization (HPO) process. State-of-the-art HPO methods follow a bandit-based approach and build on top of successive halving, where a configuration's final performance is estimated from a low-fidelity (partially trained) performance metric, and more promising configurations are assigned more resources over time. Frequently, the number of epochs is treated as the resource, letting more promising configurations train longer. Another option is to use the number of workers as the resource and directly allocate more workers to more promising configurations via data-parallel training. This article proposes a novel Resource-Adaptive Successive Doubling Algorithm (RASDA), which combines a resource-adaptive successive doubling scheme with the plain Asynchronous Successive Halving Algorithm (ASHA). The scalability of this approach is demonstrated on up to 1,024 Graphics Processing Units (GPUs) on modern HPC systems. It is applied to different types of Neural Networks (NNs) trained on large datasets from the Computer Vision (CV), Computational Fluid Dynamics (CFD), and Additive Manufacturing (AM) domains, where performing more than one full training run is usually infeasible. Empirical results show that RASDA outperforms ASHA by a factor of up to 1.9 with respect to runtime. At the same time, the solution quality of the final ASHA models is maintained or even surpassed by the implicit batch size scheduling of RASDA. With RASDA, systematic HPO is applied to a terabyte-scale scientific dataset for the first time in the literature, enabling efficient optimization of complex models on massive scientific data. The implementation of RASDA is available at https://github.com/olympiquemarcel/rasda
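To make the resource-allocation idea concrete, the following is a minimal sketch of the plain (synchronous) successive-halving backbone that ASHA and RASDA build on: all configurations are evaluated at a low fidelity, the top 1/eta fraction survives each round, and the per-configuration resource grows by a factor of eta. The function names, the `eta` default, and the toy scoring function are illustrative assumptions, not the paper's implementation, which additionally runs asynchronously and adapts the number of data-parallel workers.

```python
def successive_halving(configs, evaluate, min_resource=1, eta=2, rounds=3):
    """Toy synchronous successive halving (illustrative sketch only).

    configs      -- list of hyperparameter configurations
    evaluate     -- evaluate(config, resource) -> score (higher is better),
                    where `resource` is e.g. a number of training epochs
    min_resource -- fidelity used in the first round
    eta          -- halving/doubling factor (keep top 1/eta each round)
    """
    resource = min_resource
    survivors = list(configs)
    for _ in range(rounds):
        # Score every surviving configuration at the current partial fidelity.
        scores = {c: evaluate(c, resource) for c in survivors}
        # Keep only the most promising 1/eta fraction ...
        survivors.sort(key=lambda c: scores[c], reverse=True)
        survivors = survivors[:max(1, len(survivors) // eta)]
        # ... and give the survivors eta times more resource next round.
        resource *= eta
    return survivors[0]


# Hypothetical usage: each "configuration" is just a learning rate, and the
# toy score peaks at lr = 0.1 (the fidelity is ignored in this toy example).
best = successive_halving(
    configs=[0.5, 0.25, 0.12, 0.1, 0.05, 0.01, 0.3, 0.2],
    evaluate=lambda lr, resource: -(lr - 0.1) ** 2,
)
print(best)  # 0.1
```

RASDA replaces the epoch-based resource with the number of data-parallel workers, so surviving configurations have their worker count (and hence effective batch size) doubled rather than simply being trained for more epochs.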