Software engineers develop, fine-tune, and deploy deep learning (DL) models using a variety of development frameworks and runtime environments. DL model converters move models between frameworks and to runtime environments. Conversion errors compromise model quality and disrupt deployment. However, the failure characteristics of DL model converters are unknown, adding risk when using DL interoperability technologies. This paper analyzes failures in DL model converters. We survey software engineers about DL interoperability tools, use cases, and pain points (N=92). Then, we characterize failures in model converters associated with the main interoperability tool, ONNX (N=200 issues in the PyTorch and TensorFlow converters). Finally, we formulate and test two hypotheses about structural causes of the failures we studied. We find that the node conversion stage of a model converter accounts for ~75% of defects and that 33% of reported failures involve semantically incorrect models. The cause of semantically incorrect models is elusive, but models with behavioural inconsistencies share operator sequences. Our results motivate future research on making DL interoperability software simpler to maintain, extend, and validate. Research into behavioural tolerances and architectural coverage metrics could be fruitful.