DL frameworks are the basis for constructing all DL programs and models, and thus bugs in them can cause unexpected behaviors in any DL program or model that relies on them. Such a wide-ranging effect demonstrates the necessity and importance of guaranteeing the quality of DL frameworks. Understanding the characteristics of DL framework bugs is a fundamental step toward this quality-assurance task, as it facilitates the design of effective bug detection and debugging approaches. Hence, in this work we conduct the largest study to date on 1,000 bugs from four popular and diverse DL frameworks (i.e., TensorFlow, PyTorch, MXNet, and DL4J). By analyzing the root causes and symptoms of DL framework bugs associated with five components decomposed from DL frameworks, as well as measuring the test coverage achieved by three state-of-the-art testing techniques, we obtain 12 major findings that support a comprehensive understanding of DL framework bugs and the current status of DL framework testing practice, and then provide a series of actionable guidelines for better DL framework bug detection and debugging. Finally, based on these guidelines, we design and implement a prototype DL framework testing tool, called TenFuzz, which is evaluated to be effective and finds 3 previously unknown bugs in the latest version of TensorFlow in a preliminary study, indicating the significance of our guidelines.