With its elastic scaling and pay-as-you-go cost model, serverless computing is emerging as a prevalent platform for deploying deep learning inference services (DLISs). However, when a DLIS is deployed as a single function on a serverless platform, the varying resource requirements of different layers in DL models hinder resource utilization and increase costs. To tackle this problem, we propose a model partitioning framework called MOPAR. This work is based on two resource usage patterns of DLISs: global differences and local similarity, which arise from the presence of resource-dominant (RD) operators and layer stacking. Exploiting these patterns, MOPAR adopts a hybrid approach: it first divides the DL model vertically into multiple slices composed of similar layers to improve resource efficiency; slices containing RD operators are then further partitioned into multiple sub-slices, enabling parallel optimization to reduce inference latency. Moreover, MOPAR employs data compression and shared-memory techniques to offset the additional time introduced by communication between slices. We implement a prototype of MOPAR and evaluate its efficacy using 12 DL models across four categories on OpenFaaS and AWS Lambda. The experimental results show that MOPAR improves the resource efficiency of DLISs by 27.62\% on average while reducing latency by about 5.52\%. Furthermore, based on Lambda's pricing, MOPAR reduces the cost of running DLISs by about 2.58$\times$.