With the rapid development of Deep Learning, more and more applications on the cloud and edge tend to utilize large DNN (Deep Neural Network) models for improved task execution efficiency as well as decision-making quality. Due to memory constraints, models are commonly optimized using compression, pruning, and partitioning algorithms so that they become deployable onto resource-constrained devices. As the conditions on the computational platform change dynamically, the deployed optimization algorithms should adapt their solutions accordingly. To perform frequent evaluations of these solutions in a timely fashion, RMs (Regression Models) are commonly trained to predict the relevant solution quality metrics, such as the resulting DNN module inference latency, which is the focus of this paper. Existing prediction frameworks specify different RM training workflows, but none of them allow flexible configuration of the input parameters (e.g., batch size, device utilization rate) or of the RMs selected for different modules. In this paper, a deep learning module inference latency prediction framework is proposed, which i) hosts a set of customizable input parameters to train multiple different RMs per DNN module (e.g., convolutional layer) with self-generated datasets, and ii) automatically selects a set of trained RMs leading to the highest possible overall prediction accuracy, while keeping the prediction time/space consumption as low as possible. Furthermore, a new RM, namely MEDN (Multi-task Encoder-Decoder Network), is proposed as an alternative solution. Comprehensive experimental results show that MEDN is fast and lightweight, and capable of achieving the highest overall prediction accuracy and R-squared value. The Time/Space-efficient Auto-selection algorithm also manages to improve the overall accuracy by 2.5% and R-squared by 0.39%, compared to the MEDN single-selection scheme.
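The per-module RM auto-selection described above can be sketched as a simple greedy procedure: for each DNN module, keep the candidate RM with the highest validation accuracy, and break near-ties in favor of the RM with the lower prediction time/space cost. This is a minimal illustrative sketch under assumed names (`RMCandidate`, `select_rms`, `acc_tolerance` are hypothetical and not the paper's actual API or algorithm):

```python
# Hypothetical sketch of time/space-aware per-module RM selection.
# All names and the tie-breaking rule are illustrative assumptions,
# not the paper's published algorithm.
from dataclasses import dataclass

@dataclass
class RMCandidate:
    name: str
    accuracy: float      # validation prediction accuracy (higher is better)
    pred_time_ms: float  # per-prediction latency cost
    size_kb: float       # storage footprint of the trained RM

def select_rms(candidates_per_module, acc_tolerance=0.0):
    """For each DNN module, keep the RM with the highest accuracy;
    among RMs within `acc_tolerance` of the best, prefer the one
    with the lowest combined time/space cost."""
    selection = {}
    for module, candidates in candidates_per_module.items():
        best_acc = max(c.accuracy for c in candidates)
        eligible = [c for c in candidates
                    if c.accuracy >= best_acc - acc_tolerance]
        # Lexicographic tie-break: prediction time first, then model size.
        selection[module] = min(eligible,
                                key=lambda c: (c.pred_time_ms, c.size_kb))
    return selection

# Toy candidate pools for two module types (numbers are made up).
candidates = {
    "conv": [RMCandidate("MEDN", 0.95, 1.2, 40.0),
             RMCandidate("XGBoost", 0.95, 3.5, 900.0)],
    "fc":   [RMCandidate("MEDN", 0.90, 1.1, 35.0),
             RMCandidate("LinearReg", 0.88, 0.2, 1.0)],
}
picked = select_rms(candidates)
# "conv": accuracies tie, so the cheaper MEDN wins the time/space tie-break;
# "fc": MEDN has strictly higher accuracy and is kept despite its higher cost.
```

Raising `acc_tolerance` trades a small amount of accuracy for cheaper RMs, which mirrors the accuracy-versus-time/space trade-off the framework's auto-selection aims to balance.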