Deep learning (DL) jobs use multi-dimensional parallelism, i.e., combining data, model, and pipeline parallelism, to use large GPU clusters efficiently. Long-running jobs may experience changes to their GPU allocation: (i) resource elasticity during training adds or removes GPUs; (ii) hardware maintenance may require redeployment on different GPUs; and (iii) GPU failures force jobs to run with fewer devices. Current DL frameworks tie jobs to a fixed set of GPUs and thus lack support for these scenarios. In particular, they cannot change the multi-dimensional parallelism of an already-running job in an efficient and model-independent way. We describe Scalai, a state management library for DL systems that enables jobs to change their parallelism dynamically after the GPU allocation is updated at runtime. Scalai achieves this through a new abstraction, a parallelizable tensor collection (PTC), that externalizes the job state during training. After a GPU change, Scalai uses the PTC to transform the job state: the PTC repartitions the dataset state under data parallelism and exposes it to DL workers through a virtual file system; and the PTC obtains the model state as partitioned checkpoints and transforms them to reflect the new parallelization configuration. For efficiency, Scalai executes PTC transformations in parallel with minimal data movement between workers. Our experiments show that Scalai enables DL jobs to support dynamic parallelization with low overhead.
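To make the model-state transformation concrete, the following is a minimal sketch of the kind of repartitioning step the abstract describes: model state held as named tensor shards is re-split when the parallel degree changes after a GPU allocation update. The function and variable names are illustrative assumptions, not Scalai's actual API, and the sketch reassembles the full tensor for clarity, whereas the paper's system moves only the slices that change owner.

```python
# Hypothetical sketch of PTC-style checkpoint repartitioning (not Scalai's API).
# Assumes each model tensor is sharded along one axis across the DL workers.
import numpy as np

def repartition(shards, new_degree, axis=0):
    """Re-split one tensor's shards for a new parallel degree.

    `shards` holds the per-worker partitions saved under the old degree.
    This sketch concatenates them into the logical tensor and re-splits it;
    an efficient implementation would transfer only the changed slices.
    """
    full = np.concatenate(shards, axis=axis)            # logical, unpartitioned tensor
    return np.array_split(full, new_degree, axis=axis)  # shards for the new degree

# Example: a weight checkpointed by 4 workers is transformed for 2 workers,
# e.g., after a GPU failure shrinks the allocation.
weight = np.arange(16.0).reshape(8, 2)
old_shards = np.array_split(weight, 4, axis=0)   # state as saved by 4 workers
new_shards = repartition(old_shards, 2, axis=0)  # state expected by 2 workers
assert np.array_equal(np.concatenate(new_shards), weight)  # no state is lost
```

Under the same assumptions, the dataset state would be handled analogously: sample index ranges are re-divided among the new set of data-parallel workers, and each worker reads its range through the virtual file system described above.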